This post is about a simple tool called miniONE, which installs OpenNebula on a single host, taking you from a freshly deployed system to a ready-to-use OpenNebula installation with a single command.

Let’s say you just want to see what OpenNebula looks like before starting an evaluation, or you want to check whether some particular feature made it into v5.6. These are the cases where miniONE comes in handy.

So, just get it:

$ wget https://github.com/OpenNebula/minione/raw/v5.6.0/minione

and run it:

$ sudo bash minione

First, a number of checks need to pass. You can see all of them by running with --verbose.

$ sudo bash minione --verbose

### Checks & detection
Checking distribution and version [CentOS 7] OK
Checking cpu virtualization capabilities OK
Check free disk space OK
Using local interface [ens3] OK
Checking directories from previous installation OK
Checking user from previous installation OK
Checking sshd service is running OK
Checking bridge-utils are installed SKIP will try to install
Checking minionebr interface is not present OK
Check given VN 172.16.100.0/24 is not routed OK
Checking SELinux OK
Checking for present ssh key SKIP
Generating ssh keypair in /root/.ssh/id_rsa OK
Checking presence of the market app: "CentOS 7 - KVM" OK

Mainly, you need to run it on a supported system (CentOS 7 and, recently, Ubuntu releases). You also need a CPU capable of virtualization, some free disk space to hold the images and the virtual machines themselves, etc.
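The CPU virtualization check, for instance, boils down to looking for the Intel VT-x (vmx) or AMD-V (svm) flags in /proc/cpuinfo. Here is a rough sketch of such a check; the check_virt helper is illustrative, not miniONE's actual code:

```shell
# Illustrative re-creation of a CPU virtualization capability check:
# look for the vmx (Intel VT-x) or svm (AMD-V) flag in a cpuinfo-style file.
check_virt() {
  if grep -qE '\b(vmx|svm)\b' "$1" 2>/dev/null; then
    echo "virtualization: supported"
  else
    echo "virtualization: unsupported"
  fi
}

check_virt /proc/cpuinfo
```

If the flags are missing, KVM guests would run without hardware acceleration, which is why miniONE treats this as a hard check.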

It may happen that some non-critical check fails:

### Checks & detection
Checking directories from previous installation FAILED

In that case, you can force the installation using -f.

$ sudo bash minione -f

### Checks & detection
Checking directories from previous installation IGNORED will be deleted

Once you get through that, you may start the installation.

### Main deployment steps:
Purge previous installation
Configure bridge minionebr with IP 172.16.100.1/24
Enable NAT over ens3
Using ssh public key /root/.ssh/id_rsa.pub
Install OpenNebula version 5.6

Do you agree? [yes/no]:

### Installation
Install bridge-utils OK
Creating bridge interface minionebr OK
Restarting network OK
Enabling ipv4 forward OK
Configuring nat using iptables OK
Saving iptables changes OK
Installing DNSMasq OK
Starting DNSMasq OK
Configuring repositories OK
Installing epel OK
Installing OpenNebula packages OK
Installing ruby gems OK
Installing OpenNebula node packages OK

### Configuration
Switching onegate endpoint in oned.conf OK
Switching scheduler interval to 10sec OK
Setting initial password for current user and oneadmin OK
Starting opennebula services OK
Checking OpenNebula is working OK
Disabling ssh from virtual network OK
Adding localhost ssh key to known_hosts OK
Testing ssh connection to localhost OK
Add ssh key to oneadmin user OK
Updating datastores, TM_MAD=qcow2, SHARED=yes OK
Creating host OK
Creating virtual network OK
Exporting [CentOS 7 – KVM] from marketplace to local datastore OK
Updating template OK

What is happening? Apart from the installation itself, which simply adds the repositories and installs the OpenNebula packages, a few configuration changes must be made. Above all, the networking needs to be prepared so that you can connect to the virtual machines later.

For that purpose, a bridge interface is created with a dedicated network segment, and NAT is configured on the installing host. A DNS/DHCP server (DNSMasq) is also started for the virtual machines.
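Conceptually, this is the same setup you would do by hand with iproute2 and iptables. The interface and address values below are the miniONE defaults; a manual equivalent might look roughly like this (requires root; a sketch of the idea, not miniONE's exact commands):

```shell
# Create the bridge and give it the gateway address of the virtual network
ip link add name minionebr type bridge
ip link set minionebr up
ip addr add 172.16.100.1/24 dev minionebr

# Enable IPv4 forwarding and NAT the private segment out via the public NIC
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 172.16.100.0/24 -o ens3 -j MASQUERADE
```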

miniONE comes with default parameter values for most cases. See them all in the Help:

$ bash minione --help
-h --help                           List of supported arguments
--version [5.6]                     Specify OpenNebula version
-f --force                          Skip non-fatal validation errors
                                    (e.g., traces of existing inst.)
-v --verbose                        Be verbose
--yes                               Don't ask
--password [random generated]       Initial password for oneadmin
--ssh-pubkey [~/.ssh/id_rsa.pub]    User ssh public key
--bridge-interface [minionebr]      Bridge interface for private networking
--nat-interface [first net device]  Interface to configure for NAT
--vnet-address [172.16.100.0]       Virtual Network address
--vnet-netmask [255.255.255.0]      Virtual Network netmask
--vnet-gateway [172.16.100.1]       Virtual Network gateway (i.e. bridge IP)
--vnet-ar-ip-start [172.16.100.1]   Virtual Network AR start IP
--vnet-ar-ip-count [100]            Virtual Network AR size
--marketapp-name [CentOS 7 - KVM]   Name of Marketplace appliance to import
--vm-password [opennebula]          Root password for virtual machine 

Before the installation finishes, it also bootstraps OpenNebula to be ready to use: it enables the KVM hypervisor on localhost and downloads one appliance from the marketplace. Once that is complete, you can easily log in using the printed credentials:

### Report
OpenNebula 5.6 was installed
Sunstone (the webui) is running on:
  http://192.168.100.101:9869/
Use following to login:
  user: oneadmin
  password: o6ARsMAdGe

And that’s it! It won’t take us to Mars, but it might be handy nonetheless.

Just a few minutes of your time…

As we continue to focus on improvements of OpenNebula, we need direction from you, the User Community.

Please take a few minutes to fill out this OpenNebula Survey 2018 – to help us understand how you are using OpenNebula, and what you need going forward.  All information collected is confidential, and will not be shared.

Many, many thanks!

Our newsletter contains the highlights of the OpenNebula project and its Community throughout the month.

Technology

There are a lot of OpenNebula features currently being worked on that deserve some attention:

  • We are working on an upcoming feature aimed at simplifying the management of VM templates that can be deployed on multiple clusters, by creating an automated selection process for the VM networks.  Check out the recent post.
  • Migrating workloads between KVM and VMware hypervisors will soon be as simple as bread and butter for breakfast. Check out the recent post.
  • We are also working on a “self-provisioning” method for Virtual Networks.  No longer will Virtual Networks be created only by cloud administrators, but rather, end-users can be given the ability to make changes at the logic level, like changes to IP ranges, to the DNS server, etc.

Keep an eye out for the upcoming version 5.8!

Community

November has been an exciting month for the User Community.

Outreach

  • We currently have our 2019 OpenNebula TechDay Call for Hosts open.  Take a look at your calendars, and think about planning a TechDay of your own!
  • We want your feedback!!
    • In the coming weeks, we are going to be sending out an OpenNebula survey, with the intention of learning a bit about how you are using OpenNebula.  Please plan to take the time to fill it out, as its purpose is for us to be able to serve you better!
    • If you attended the OpenNebulaConf in Amsterdam, and you haven’t submitted your feedback survey, please do so, and let us know what you think!

Let’s welcome in December!

Stay Connected!

It’s been a few weeks now since the 2018 OpenNebula Conference in Amsterdam.  It was great to see so many members of the User Community, enthusiastic to learn and share insights around OpenNebula and the current technology landscape.  We give a huge thanks to the great lineup of speakers who presented, as well as to the sponsoring organizations that helped to make the conference a success!

Here are the materials from the conference, available for you to review at your leisure:

Take some time to review the material, think about how it may help with your environment or your proposed solution, and reach back out to the community if you have questions or suggestions. We’d love your feedback!

Additionally, you will have seen our recent 2019 OpenNebula TechDay “Call for Hosts”.
Think about hosting one of your own!

Stay connected!

We are opening the Call for Hosts for the OpenNebula TechDays in 2019!

Why don’t you host an OpenNebula TechDay of your own?

The OpenNebula Cloud TechDays are day-long educational and networking events to learn about OpenNebula.  Join our technical experts from OpenNebula Systems for a one-day, hands-on workshop on cloud installation and operation.  You’ll get a comprehensive overview of OpenNebula and will be equipped with the skills and insight to take back to your company and implement right away.

OpenNebula TechDays started in March 2014 and we’ve already celebrated over 30 different TechDays in the Netherlands, Belgium, Spain, United States, Romania, Czech Republic, France, Canada, Malaysia, Bulgaria, Germany and Ireland. They have been hosted by organizations like:

  • BestBuy
  • Telefonica
  • BIT.nl
  • Transunion
  • Hitachi
  • Microsoft
  • BlackBerry
  • Harvard University
  • Netways
  • and many others

Think about hosting a Cloud TechDay – we would love to work with you.  We only require that you provide a room with enough capacity for the attendees and some essential materials (WiFi, projector, etc…).

Go to the  TechDay Guidelines and Registration Form

The deadline for this call is December 11, 2018.  We look forward to hearing from you!

At OpenNebula Systems, we are working on an upcoming feature aimed specifically at simplifying the management of VM templates that can be deployed on multiple clusters. When a VM template refers to disk images on datastores shared across different clusters, the VM template can be allocated in any of them. This also requires that the clusters share the network dependencies of the VM template, which may not always be the desired design.

In order to overcome this problem, this feature will implement an automatic selection process for the virtual networks of a VM template. The actual virtual network used by the VM will be selected among those available in the selected cluster using a similar algorithm to the one used to select system datastores or hosts. In this way, the very same VM template will be deployable on multiple clusters without requiring shared networks or any modification.

Quick Video: VM Templates – Automatic Network selection

Anticipated Changes on the CLI and XML-RPC API

The VM template includes a list of network interface controllers (NICs), each attached to a virtual network. The NIC definition has been extended to include a new selection mode (automatic).

To create a new VM from CLI you can type a command like this:

onevm create --name <name> --cpu <cpu> --memory <memory> --nic auto

This command will create a VM with this NIC:

<NIC>
  <NETWORK_MODE><![CDATA[auto]]></NETWORK_MODE>
  <NIC_ID><![CDATA[0]]></NIC_ID>
</NIC>

The network selection mode is set by the new attribute NETWORK_MODE, which can be set to either MANUAL (current selection method) or AUTO. The new attribute is optional, and if not changed, it defaults to MANUAL. This way, existing templates do not need to be upgraded as the current interface is preserved.
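As an illustration, a template NIC using automatic selection could look like the fragment below. Any attribute beyond NETWORK_MODE is an assumption about how a selection constraint might be expressed; check the final 5.8 documentation for the exact syntax:

```
NIC = [
  NETWORK_MODE = "auto",
  SCHED_REQUIREMENTS = "CLUSTER_ID = 3"  # hypothetical constraint on candidate networks
]
```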

The API call one.vm.deploy will accept a new template document as an extra parameter. This new parameter will include the selected networks for those NICs that use the automatic selection process.

Current

The VM template, as used by one.template.instantiate, contains:

NIC = [
  NETWORK = pub
]

Both one.template.instantiate and one.vm.deploy then work with the fully resolved NIC:

NIC = [
  AR_ID = 0,
  BRIDGE = br0,
  CLUSTER_ID = 3,
  IP = 10.0.0.4,
  NETWORK = pub,
  NETWORK_ID = 2,
  NIC_ID = 3,
  SECURITY_GROUPS = 0,
  TARGET = one-6-7-0,
  VN_MAD = bridge
]

New Feature

The VM template only sets the selection mode:

NIC = [
  NETWORK_MODE = auto
]

and the extra template passed to one.vm.deploy identifies the NIC to be resolved:

NIC = [
  NIC_ID = 3,
  NETWORK_MODE = auto
]

Sunstone

From the Template section, we can define which NICs we want set to AUTO and which we do not.

Also, we can change a NIC from the Instantiate section.

You can learn how to derive your own network selection policies in the Scheduler guide.

The plan is for this capability to be included in the upcoming version 5.8, and we will likely backport it to version 5.6.x, as well.  If you have any questions or feedback, send us your input – either on our Developers’ Forum or leave a comment below.

 

OpenNebula allows the management of hybrid environments, offering end-users a self-service portal to consume resources from both VMware-based and KVM-based infrastructures in a way that is transparent to the user.

In order to smooth the way for migrating workloads between KVM and VMware, OpenNebula is working on an awesome feature for its next release (5.8, and probably backported to 5.6.x).

Here’s a quick review of the complete flow of a conversion from vmdk to qcow2:

Export image to OpenNebula Marketplace

Suppose you have a vmdk image and you want to use it in KVM. At this point, there are two possible situations:

  • We already have the image in a vmdk datastore. In this case, we need to upload the image to a marketplace.
  • We want to use an image provided by the OpenNebula Marketplace. In this case, there is nothing to do.

Import to vmdk

Now that we know which image we want to use, we have to download it to the datastore where we want to store it. Once the image is downloaded, we are ready to use it!

Add it to a VM Template

For every image downloaded from a MarketPlace, OpenNebula creates a template so the image can be used easily when the destination is vCenter; otherwise, we have to create a template with the image ourselves. We can then update the template with hosts, more images, networks, etc.

Implementation

From an implementation perspective:

  • Every time we download an image, it is downloaded to the frontend.
  • Then, it is converted to the proper type with the “qemu-img convert” command.
  • Finally, OpenNebula will copy the image to the datastore.

From vmdk to qcow2:  We only have to take care of setting the bus driver.

From qcow2 to vmdk:  The limitation here is that when we convert an image to vmdk, we lose the contextualization. To recover guest integration, we have to install the VMware tools.
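The conversion step itself relies on standard qemu-img invocations. For instance (file names are illustrative):

```shell
# vmdk -> qcow2 (for KVM); remember to set the appropriate bus driver in the template
qemu-img convert -f vmdk -O qcow2 disk.vmdk disk.qcow2

# qcow2 -> vmdk (for vCenter); contextualization is lost, so install the VMware tools in the guest
qemu-img convert -f qcow2 -O vmdk disk.qcow2 disk.vmdk
```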

Let’s walk through an example

Imagine that we have a qcow2 image within the OpenNebula MarketPlace and we want to use it in vCenter.

1.)  We select the image:

2.)  Check that the image is in qcow2/raw format, then click on the download button.

3.)  Now, we select the vCenter destination datastore.

4.)  As we are going to export the image to a vCenter datastore, we should have an empty template to which we can attach the new image.

5.)  Now we are ready to instantiate the template, and we will see the VM in vCenter.

6.)  And now, we have VNC working.

As I mentioned, the plan is for this capability to be included in the upcoming version 5.8, and we will likely backport it to version 5.6.x, as well.  If you have any questions or feedback, send us your input – either on our Developers’ Forum or leave a comment below.

Universities are nowadays facing the challenge of adapting themselves to a new generation of students, the so-called Millennial generation, for whom technological devices are essential tools to carry out their daily tasks.

At the Université Catholique de Louvain, we wanted to be aligned with the new technological trends and thus be able to embrace BYOD in order to give our students the possibility to use their own devices to access the same software as if they were in the standard computer classrooms from anywhere and anytime.

VDI was the key to meet our needs, and we were clear that we wanted the OpenNebula orchestrator to be the cornerstone of our virtual desktop infrastructure. The UDS Enterprise VDI & vApp connection broker, which is fully compatible with OpenNebula, was the missing component.

Thanks to the OpenNebula + UDS Enterprise joint VDI solution, we are progressively giving our 37,000 students access to standard Windows or Linux environments from outside the classrooms. Teachers now have a teaching environment independent from their own computer, and researchers can access software on demand with better calculation performance, thanks to remote applications.

All with a high availability VDI infrastructure, very easy to access and manage, and with lower costs for students and the IT Department.

If you’d like to know how our VDI infrastructure was built, the different components used and their role in the platform, and how the IT staff deploys and manages the virtual desktops, don’t miss our talk in the next OpenNebula Conf 2018 in Amsterdam: “UCLouvain Case Study – VDI for 37,000 students with OpenNebula”.

There, we will also explain how we are extending the use of OpenNebula to remote applications and what high availability infrastructure we are now implementing to guarantee a 24/7 available service.

Hope to see you next Tuesday, November 13 in Amsterdam!

Infrastructure as Code (IaC) is changing the way we do things. Some people think it is the motorway we have to follow to stay aligned with the business; in short, they want us to be agile.

The arrival of tools such as Ansible, Puppet, SaltStack, and Chef has enabled sysadmins to maintain modular, automatable infrastructure. This time I would like to introduce the Terraform tool.

Terraform is a declarative provisioning tool based on the Infrastructure as Code paradigm. It is a multipurpose composition tool: it composes multiple tiers (SaaS/PaaS/IaaS).

Terraform is not a cloud-agnostic tool by itself, but in combination with OpenNebula it can be amazing. By taking advantage of the template concept, it allows us to deploy VMs agnostically on different cloud providers, such as AWS, Azure, or an on-premise cloud infrastructure.

Several Terraform providers have been developed by the OpenNebula community. The first example is the project started by the Runtastic team, which has recently been enhanced by BlackBerry.

After this little introduction about Terraform, let’s go with a tutorial where a PaaS Rancher platform is deployed in an automated way with Terraform and RKE.

Deploy Rancher HA in OpenNebula with Terraform and RKE

Install Terraform

To install Terraform, find the appropriate package for your system and download it:

$ curl -O https://releases.hashicorp.com/terraform/0.11.10/terraform_0.11.10_linux_amd64.zip

After downloading Terraform, unzip the package:

$ sudo mkdir /bin/terraform
$ sudo unzip terraform_0.11.10_linux_amd64.zip -d /bin/terraform

After installing Terraform, verify the installation worked by opening a new terminal session and checking that Terraform is available.

$ export PATH=$PATH:/bin/terraform
$ terraform --version

Add Terraform providers for OpenNebula and RKE

You need to install Go first: https://golang.org/doc/install

After go is installed and set up, just type:

$ go get github.com/blackberry/terraform-provider-opennebula
$ go install github.com/blackberry/terraform-provider-opennebula 

Copy your terraform-provider-opennebula binary to a folder such as /usr/local/bin, and write this in ~/.terraformrc:

providers {
  opennebula = "/usr/local/bin/terraform-provider-opennebula"
  rke = "/usr/local/bin/terraform-provider-rke"
}
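With the providers in place, a minimal main.tf can describe an OpenNebula VM built from an existing template. The endpoint, resource, and attribute names below are assumptions based on the BlackBerry provider's README; check the provider documentation for the exact schema:

```
provider "opennebula" {
  endpoint = "http://frontend:2633/RPC2"  # XML-RPC endpoint of your OpenNebula frontend
  username = "oneadmin"
  password = "opennebula"
}

# Instantiate a VM from an existing OpenNebula template (names are illustrative)
resource "opennebula_vm" "rancher_node" {
  name        = "rancher-node-1"
  template_id = 6
}
```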

For the RKE provider, download the binary and copy it to the same folder:

$ wget https://github.com/yamamoto-febc/terraform-provider-rke/releases/download/0.5.0/terraform-provider-rke_0.5.0_linux-amd64.zip 
$ sudo unzip terraform-provider-rke_0.5.0_linux-amd64.zip -d /usr/local/bin/terraform-provider-rke

Install Rancher

Clone this repo:

$ git clone https://github.com/CSUC/terraform-rke-paas.git

Create infrastructure

First, we initialize Terraform with:

$ terraform init

We let Terraform create a plan, which we can review:

$ terraform plan

The plan command lets you see what Terraform will do before actually doing it.

Now we execute:

$ terraform apply

That’s it – you should have a functional Rancher server:

Now, you can install the Docker Machine OpenNebula Driver and deploy new Kubernetes clusters in your Rancher platform:

The complete tutorial is available on GitHub:

https://github.com/CSUC/terraform-rke-paas

If you are interested in more details, don’t miss the talk: Hybrid Clouds: Dancing with “Automated” Virtual Machines in the next OpenNebula Conf 2018 in Amsterdam.

See you cloudadmins!

Barcelona UserGroup Team –  www.cloudadmins.org

 

Our newsletter contains the highlights of the OpenNebula project and its Community throughout the month.

Technology

This month the team released vOneCloud version 3.2.1, which is based on OpenNebula 5.6.1 and, as such, includes all the bug fixes and functionalities introduced in 5.6.1.

A few examples include:

  • The order of elements in list API calls can be selected (ascending or descending).
  • XML-RPC calls can report the client IP and port.
  • New quotas allow you to configure limits for “running” VMs.
  • Virtual Machines associated to a Virtual Router have all actions allowed except nic-attach/detach.

For more details of what is included in vOneCloud 3.2.1, check the Release Notes.

There are several other items “in the oven”, getting ready for release.  Here are a few:

  • We’ve been working on making Virtual Network self-provisioning easier, by allowing end-users to create their own networks from pre-defined network templates.
  • We are making continued progress on the LXD drivers, and getting them in shape for version 5.8.
  • Very soon vCenter users will be able to download any appliance from the Marketplace. OpenNebula datastore drivers will take care of any image conversion required.
  • Lastly, King has “Funded a Feature” through our FaF program – this one allowing Virtual Machines to define automatic network selection for their NICs. The scheduler will pick the best Virtual Network at deployment time. This will simplify the VM Template management, as it will reduce the overall number of templates needed.

Community

Across the OpenNebula User Community, we continue to see interesting and important conversation and discussion.

Leboncoin posted a thorough overview on their own blog of their infrastructure needs – including High Availability and Production stability – along with details on how their choice of using OpenNebula has helped them to successfully build and manage their own IaaS environment.  Take a moment to read the article.

“[The] OpenNebula community is also particularly active and new features are coming out regularly.”

Nodeweaver posted that they have an update being prepared to allow for single-click deployment of their Terraformer VM which integrates Ansible and Terraform to manage OpenNebula resources.  Along with this, in the near future, expect to see the VM available for use in the OpenNebula Marketplace.

LINBIT has been working on a new storage driver that integrates LINSTOR with OpenNebula. Some of the features are:

  • Deploy disk images to storage nodes automatically, or on selected nodes.
  • Attach images to hosts over the network.
  • Easy deployment of highly available images.
  • Allows live migration of VMs, even when using the ssh system datastore transfer manager.

LINBIT will be leading the effort to create this as an OpenNebula Add-On, so “keep your eyes peeled”.

Though more generic, here’s a quick reference to an interesting article about the ongoing transformation of Data Centers.

And lastly, as we communicated late in September, OpenNebula released an initial prototype (with source code and packages available at the OpenNebula GitHub) for Host Provisioning. We continue to develop this set of capabilities. In the meantime, we published a blog post reviewing a practical exercise we carried out using this Host Provisioning feature to demonstrate its value as the case for Edge Computing continues to grow.

Outreach

October is a long month, and we have been busy, but mostly with our sights set on November! We’ve put a great amount of planning into this year’s OpenNebula Conference 2018, scheduled for November 12-13 in Amsterdam.  We have a great agenda of speakers lined up, with hands-on tutorials, and plenty of opportunity to network, share, and discuss with experts and practitioners in the cloud community.  We look forward to seeing many of you in Amsterdam. For those who cannot attend, we will be providing updates, presentation documents, and videos from the #OpenNebulaConf.

In addition, OpenNebula will be attending the VMworld Europe Conference in Barcelona from November 5-8.  We’ll be there ready to showcase OpenNebula’s integration with VMware Cloud on AWS, along with the new features of both OpenNebula 5.6 and vOneCloud 3.2.1.  Be sure to swing by booth #E422!

Soon enough, 2019 will be here.  Start thinking about hosting an OpenNebula TechDay!

We’ll see you in November! Stay Connected!