Our newsletter contains the highlights of the OpenNebula project and its Community throughout the month.

Technology

There are a lot of OpenNebula features currently being worked on that deserve some attention:

  • We are working on an upcoming feature aimed at simplifying the management of VM templates that can be deployed on multiple clusters, by creating an automatic selection process for the VM networks.  Check out the recent post.
  • Migrating workloads between KVM and VMware hypervisors will soon be as simple as bread and butter for breakfast.  Check out the recent post.
  • We are also working on a “self-provisioning” method for Virtual Networks.  Virtual Networks will no longer be created only by cloud administrators; end-users can be given the ability to make changes at the logic level, such as changes to IP ranges, to the DNS server, etc.

Keep an eye out for the upcoming version 5.8!

Community

November has been an exciting month for the User Community.

Outreach

  • We currently have our 2019 OpenNebula TechDay Call for Hosts open.  Take a look at your calendars, and think about planning a TechDay of your own!
  • We want your feedback!!
    • In the coming weeks, we are going to be sending out an OpenNebula survey, with the intention of learning a bit about how you are using OpenNebula.  Please take the time to fill it out; it will help us serve you better!
    • If you attended the OpenNebulaConf in Amsterdam, and you haven’t submitted your feedback survey, please do so, and let us know what you think!

Let’s welcome in December!

Stay Connected!

It’s been a few weeks now since the 2018 OpenNebula Conference in Amsterdam.  It was great to see so many members of the User Community, enthusiastic to learn and share insights around OpenNebula and the current technology landscape.  We give huge thanks to the great lineup of speakers who presented, as well as to the sponsoring organizations that helped make the conference a success!

Here are the materials from the conference, available for you to review at your leisure:

Take some time to review the material, think about how it may help with your environment or your proposed solution, and reach back out to the community if you have questions or suggestions. We’d love your feedback!

Additionally, you may have seen our recent 2019 OpenNebula TechDay “Call for Hosts”.
Think about hosting one of your own!

Stay connected!

We are opening the Call for Hosts for the OpenNebula TechDays in 2019!

Why don’t you host an OpenNebula TechDay of your own?

The OpenNebula Cloud TechDays are day-long educational and networking events to learn about OpenNebula.  Join our technical experts from OpenNebula Systems for a one-day, hands-on workshop on cloud installation and operation.  You’ll get a comprehensive overview of OpenNebula and will be equipped with the skills and insight to take back to your company and implement right away.

OpenNebula TechDays started in March 2014, and we have already held over 30 TechDays in the Netherlands, Belgium, Spain, the United States, Romania, the Czech Republic, France, Canada, Malaysia, Bulgaria, Germany and Ireland. They have been hosted by organizations such as:

  • BestBuy
  • Telefonica
  • BIT.nl
  • Transunion
  • Hitachi
  • Microsoft
  • BlackBerry
  • Harvard University
  • Netways
  • and many others

Think about hosting a Cloud TechDay – we would love to work with you.  We only require that you provide a room with enough capacity for the attendees and some essential materials (WiFi, a projector, etc.).

Go to the TechDay Guidelines and Registration Form.

The deadline for this call is December 11, 2018.  We look forward to hearing from you!

At OpenNebula Systems, we are working on an upcoming feature with the specific aim of simplifying the management of VM templates that can be deployed on multiple clusters. When a VM template refers to disk images on datastores shared across different clusters, the VM template can be allocated in any of them. However, this also requires that the clusters share the network dependencies of the VM template, which may not always be the desired design.

In order to overcome this problem, this feature will implement an automatic selection process for the virtual networks of a VM template. The actual virtual network used by the VM will be selected among those available in the selected cluster using a similar algorithm to the one used to select system datastores or hosts. In this way, the very same VM template will be deployable on multiple clusters without requiring shared networks or any modification.

Quick Video: VM Templates – Automatic Network Selection

Anticipated Changes on the CLI and XML-RPC API

The VM template includes a list of network interface controllers (NIC) attached to a virtual network. The definition of the NIC has been extended to include a new selection mode (automatic).

To create a new VM from CLI you can type a command like this:

onevm create --name <name> --cpu <cpu> --memory <memory> --nic auto

This command will create a VM with this NIC:

<NIC>
  <NETWORK_MODE><![CDATA[auto]]></NETWORK_MODE>
  <NIC_ID><![CDATA[0]]></NIC_ID>
</NIC>

The network selection mode is set by the new attribute NETWORK_MODE, which can be set to either MANUAL (current selection method) or AUTO. The new attribute is optional, and if not changed, it defaults to MANUAL. This way, existing templates do not need to be upgraded as the current interface is preserved.
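
As an illustration, here is a minimal sketch of a template mixing both modes; the template name, capacity values and the network “pub” are placeholders:

$ cat vm_template.txt
NAME   = "multi-cluster-vm"
CPU    = 1
MEMORY = 1024
# Resolved automatically in whatever cluster is selected
NIC    = [ NETWORK_MODE = "auto" ]
# Classic, manual selection (MANUAL is the default mode)
NIC    = [ NETWORK = "pub" ]

$ onetemplate create vm_template.txt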

The API call one.vm.deploy will accept a new template document as an extra parameter, which will include the selected networks for those NICs using the automatic selection process.

In summary, this is how a NIC looks at each stage (the VM template, one.template.instantiate and one.vm.deploy), with the current interface and with the new feature:

Current

VM template:

NIC = [
  NETWORK = pub
]

one.template.instantiate and one.vm.deploy (the NIC is fully resolved):

NIC = [
  AR_ID = 0,
  BRIDGE = br0,
  CLUSTER_ID = 3,
  IP = 10.0.0.4,
  NETWORK = pub,
  NETWORK_ID = 2,
  NIC_ID = 3,
  SECURITY_GROUPS = 0,
  TARGET = one-6-7-0,
  VN_MAD = bridge
]

New feature

VM template:

NIC = [
  NETWORK_MODE = auto
]

one.template.instantiate (the selected network is passed to one.vm.deploy in the extra template parameter described above):

NIC = [
  NIC_ID = 3,
  NETWORK_MODE = auto
]

Sunstone

From the Template section, we can define which NICs we want set to AUTO and which we do not.

Also, we can change a NIC’s selection mode from the Instantiate section.

You can learn how to define your own network selection policies in the Scheduler guide.
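
As a taste of what such a policy might look like, here is a sketch that assumes the NIC accepts scheduler-style requirement and rank expressions, mirroring how hosts and system datastores are selected; the attribute names and the expressions are our assumptions, so check the Scheduler guide for the final syntax:

NIC = [
  NETWORK_MODE       = "auto",
  SCHED_REQUIREMENTS = "VLAN_ID = 42",   # assumed filter: only consider matching networks
  SCHED_RANK         = "-USED_LEASES"    # assumed rank: prefer the least used network
]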

The plan is for this capability to be included in the upcoming version 5.8, and we will likely backport it to version 5.6.x, as well.  If you have any questions or feedback, send us your input – either on our Developers’ Forum or leave a comment below.


OpenNebula allows the management of hybrid environments, offering end-users a self-service portal to consume resources from both VMware-based and KVM-based infrastructures in a way that is transparent to the user.

In order to smooth the path for migrating workloads between KVM and VMware, OpenNebula is working on an awesome feature for its next release (5.8, and probably backported to 5.6.x).

Here’s a quick review of the complete flow of a translation from vmdk to qcow2:

Export image to OpenNebula Marketplace

Suppose you have a vmdk image and you want to use it in KVM. At this point, we can be in one of two situations:

  • We have the image in a vmdk datastore. In this case, we need to upload the image to a marketplace.
  • We want to use an image provided by the OpenNebula Marketplace. In this case, there is nothing to do.

Import to vmdk

Now that we know which image we want to use, we have to download it to the datastore where we want to store it. Once the image is downloaded, we are ready to use it!
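
From the CLI, this boils down to exporting the marketplace appliance into the chosen datastore, along the lines of this sketch (the appliance name, image name and datastore ID are placeholders):

# Export the marketplace appliance as an image in datastore 1
$ onemarketapp export "Ubuntu 18.04" ubuntu-kvm --datastore 1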

Add it to a VM Template

For every image downloaded from a Marketplace, OpenNebula creates a template so the image can be used easily when the destination is vCenter; otherwise, we have to create a template with the image ourselves. We can then update the template with hosts, more images, networks, etc.

Implementation

From an implementation perspective:

  • Every time we download an image, it is downloaded to the frontend.
  • Then, it is converted to the proper type with the “qemu-img convert” command (see the sketch after this list).
  • Finally, OpenNebula will copy the image to the datastore.
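
For reference, the conversion step is essentially a qemu-img call on the frontend, as in this sketch (file names are illustrative):

# vmdk to qcow2, for KVM
$ qemu-img convert -f vmdk -O qcow2 disk.vmdk disk.qcow2

# qcow2 to vmdk, for vCenter
$ qemu-img convert -f qcow2 -O vmdk disk.qcow2 disk.vmdk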

From vmdk to qcow2:  The only thing to take into account is that we have to set the bus driver.

From qcow2 to vmdk:  The limitation here is that when we convert an image to vmdk, we lose the contextualization. What we have to do is install the VMware tools, as shown below.
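
For Linux guests, installing the standard open-vm-tools package inside the image is usually enough; this is a generic sketch, not an OpenNebula-specific step:

# Debian/Ubuntu guests
$ sudo apt-get install open-vm-tools

# RHEL/CentOS guests
$ sudo yum install open-vm-tools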

Let’s walk through an example

Imagine that we have a qcow2 image within the OpenNebula MarketPlace and we want to use it in vCenter.

1.)  We select the image:

2.)  Check that the image is in qcow2 or raw format, then click on the download button.

3.)  Now, we select the vCenter destination datastore.

4.)  As we are going to export the image to a vCenter datastore, we should have an empty template ready in order to attach the new image to it.

5.)  Now we are ready to instantiate the template, and we will see the VM in vCenter.

6.)  And now, we have VNC working.

As I mentioned, the plan is for this capability to be included in the upcoming version 5.8, and we will likely backport it to version 5.6.x, as well.  If you have any questions or feedback, send us your input – either on our Developers’ Forum or leave a comment below.

Universities are nowadays facing the challenge of adapting to a new generation of students, the so-called Millennial generation, for whom technological devices are essential tools for carrying out their daily tasks.

At the Université Catholique de Louvain, we wanted to be aligned with new technological trends and embrace BYOD, in order to give our students the possibility to use their own devices to access the same software as in the standard computer classrooms, from anywhere and at any time.

VDI was the key to meet our needs, and we were clear that we wanted the OpenNebula orchestrator to be the cornerstone of our virtual desktop infrastructure. The UDS Enterprise VDI & vApp connection broker, which is fully compatible with OpenNebula, was the missing component.

Thanks to the joint OpenNebula + UDS Enterprise VDI solution, we are progressively giving our 37,000 students access to standard Windows or Linux environments from outside the classrooms. Teachers now have a teaching environment independent of their own computers, and researchers can access software on demand, with better computing performance, thanks to remote applications.

All of this with a highly available VDI infrastructure that is very easy to access and manage, and with lower costs for students and the IT Department.

If you’d like to know how our VDI infrastructure was built, the different components used and their role in the platform, and how the IT staff deploys and manages the virtual desktops, don’t miss our talk at the next OpenNebula Conf 2018 in Amsterdam: “UCLouvain Case Study – VDI for 37,000 students with OpenNebula”.

There, we will also explain how we are extending the use of OpenNebula to remote applications and what high availability infrastructure we are now implementing to guarantee a 24/7 available service.

Hope to see you next Tuesday, November 13 in Amsterdam!

Infrastructure as Code (IaC) is changing the way we do things. Some people think it is the motorway we have to follow to stay aligned with the business; in short, they want us to be agile.

The arrival of tools such as Ansible, Puppet, SaltStack, and Chef has enabled sysadmins to maintain modular, automatable infrastructure. This time, I would like to introduce the Terraform tool.

Terraform is a declarative provisioning tool based on the Infrastructure as Code paradigm. It is a multipurpose composition tool: it composes multiple tiers (SaaS/PaaS/IaaS).

Terraform is not a cloud-agnostic tool, but in combination with OpenNebula it can be amazing. By taking advantage of the template concept, it allows us to deploy VMs agnostically on different cloud providers, such as AWS, Azure or an on-premise cloud infrastructure.

Several Terraform providers have been developed by the OpenNebula community. The first example is the project started by the Runtastic team, which has recently been enhanced by BlackBerry.

After this little introduction to Terraform, let’s move on to a tutorial in which a Rancher PaaS platform is deployed in an automated way with Terraform and RKE.

Deploy Rancher HA in OpenNebula with Terraform and RKE

Install Terraform

To install Terraform, find the appropriate package for your system and download it:

$ curl -O https://releases.hashicorp.com/terraform/0.11.10/terraform_0.11.10_linux_amd64.zip

After downloading Terraform, unzip the package:

$ sudo mkdir /bin/terraform
$ sudo unzip terraform_0.11.10_linux_amd64.zip -d /bin/terraform

After installing Terraform, verify the installation worked by opening a new terminal session and checking that Terraform is available.

$ export PATH=$PATH:/bin/terraform
$ terraform --version

Add Terraform providers for OpenNebula and RKE

You need to install Go first: https://golang.org/doc/install

After go is installed and set up, just type:

$ go get github.com/blackberry/terraform-provider-opennebula
$ go install github.com/blackberry/terraform-provider-opennebula 

Copy your terraform-provider-opennebula binary to a folder such as /usr/local/bin, and write this in ~/.terraformrc:

providers {
  opennebula = "/usr/local/bin/terraform-provider-opennebula"
  rke = "/usr/local/bin/terraform-provider-rke"
}

For the RKE provider, download the binary and copy it into the same folder:

$ wget https://github.com/yamamoto-febc/terraform-provider-rke/releases/download/0.5.0/terraform-provider-rke_0.5.0_linux-amd64.zip
$ sudo unzip terraform-provider-rke_0.5.0_linux-amd64.zip -d /usr/local/bin

Install Rancher

Clone this repo:

$ git clone https://github.com/CSUC/terraform-rke-paas.git

Create infrastructure

First, we initialize Terraform:

$ terraform init

We let Terraform create a plan, which we can review:

$ terraform plan

The plan command lets you see what Terraform will do before actually doing it.

Now we execute:

$ terraform apply

That’s it – you should now have a functional Rancher server.

Now, you can install the Docker Machine OpenNebula Driver and deploy new Kubernetes clusters on your Rancher platform, as sketched below.
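
As a rough sketch of that last step, creating a node with the OpenNebula driver looks like this; the endpoint, auth file and template name are placeholders, and the flag names should be double-checked against the driver’s documentation:

# Point the driver at the OpenNebula frontend (placeholder values)
$ export ONE_XMLRPC="http://frontend:2633/RPC2"
$ export ONE_AUTH="$HOME/.one/one_auth"
# Create a Rancher node from an existing OpenNebula template
$ docker-machine create --driver opennebula \
    --opennebula-template-name rancher-node \
    rancher-node-1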

The complete tutorial is available on GitHub:

https://github.com/CSUC/terraform-rke-paas

If you are interested in more details, don’t miss the talk “Hybrid Clouds: Dancing with ‘Automated’ Virtual Machines” at the next OpenNebula Conf 2018 in Amsterdam.

See you cloudadmins!

Barcelona UserGroup Team –  www.cloudadmins.org