OpenNebula at Cloud Expo Europe 2014

A few days ago we were at the Cloud Expo Europe 2014 event in London. As part of the Open Cloud Forum sessions about open source cloud solutions, there was an OpenNebula tutorial.

This was planned as a hands-on tutorial where attendees follow the slides and build their own small OpenNebula installation in a virtual environment, but the people who showed up were not really interested in replicating the tutorial on their laptops… After the initial let-down, however, it turned out to be a very engaged audience that showed great interest! Because the introduction and basic configuration tutorial went fairly quickly, we had time to continue with a question & answer session that lasted longer than the tutorial itself.


Some were common questions that we get asked from time to time:

“It looks far better than I expected for what I thought was a research-only project.” Well, OpenNebula is a solid product, and it has been ready for production use for quite some time. Take a look at the featured users page.

“But what if I need a level of support that an open source community cannot guarantee?” Good news! C12G Labs, the company behind OpenNebula, has you covered. The best thing is that the commercial support is offered for the same open source packages available to anyone.

“Is the VMware support on par with the other hypervisors?” Absolutely! All the features are supported. You can even use a heterogeneous environment with the VMware hosts grouped into a cluster, working alongside a KVM or Xen cluster.

We also had time to talk about advanced OpenNebula features. Our documentation is quite extensive, and reading all of it is definitely not appealing, but if you are starting with OpenNebula I recommend that you at least skim through all the sections. You may find out that you have several storage options, that OpenNebula can manage groups of VMs and has auto-scaling features, or that VM guests can report back to ONE.

People were also very interested in the customization capabilities of OpenNebula. Besides the powerful driver mechanism that allows administrators to tailor the exact behaviour of OpenNebula, you can also customize the way it looks. The CLI output can be tweaked in the etc configuration files, and Sunstone can be adjusted, down to which buttons are shown, with the Sunstone Views.
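
For instance, each Sunstone view is defined by a YAML file under /etc/one/sunstone-views/. As a rough sketch (the exact attribute names may vary between versions), hiding a button in a view could look like this:

tabs:
  vms-tab:
    actions:
      VM.create_dialog: true
      VM.stop: false        # hide the stop button in this view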

Thanks to the engaged audience for their great interest and their feedback. See you next year!

Hands-on Tutorial at Cloud Expo Europe 2014, London

We are happy to announce that next February 27 we will be giving a tutorial at the Open Cloud Forum event, which will take place at Cloud Expo Europe 2014 in London.


This hands-on tutorial will give an overview of how OpenNebula is used to build and operate private clouds. The target audience is devops and system administrators interested in deploying a private cloud solution, or in integrating OpenNebula with other platforms. The attendees will build, configure and operate their own OpenNebula cloud on their laptops, using two VirtualBox virtual machines.

Don’t miss this great conference, register now for free!

OpenNebula 4.4: Multiple System Datastore with Storage Load Balancing

This is the third entry in a blog post series explaining how to make the most out of your OpenNebula 4.4 cloud. In previous posts we explained the enhanced Amazon cloud bursting features and the multiple groups functionality.

OpenNebula supports different storage backends. You can even create VMs that use disks from several backend technologies at the same time, e.g. Ceph and LVM.
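
As a hypothetical sketch (the image names are made up), a VM template mixing two backends could look like this:

NAME   = "multi-backend-vm"
CPU    = 1
MEMORY = 512
# First disk from an Image registered in a Ceph datastore
DISK   = [ IMAGE = "os-disk" ]
# Second disk from an Image registered in an LVM datastore
DISK   = [ IMAGE = "scratch-disk" ]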

The system datastore is a special Datastore class that holds disks and configuration files for running VMs, instead of Images. Up to OpenNebula 4.2 each Host could only use one system datastore, but in OpenNebula 4.4 we have added support for multiple system datastores.

Maybe the most immediate advantage of this feature is that if your system datastore is running out of space, you can add a second backend and start deploying new VMs there. But the scheduler also knows about the available system datastores, and that opens up more interesting use cases.

Let’s see a quick example. Suppose you have a local SSD disk inside each Host, and also an NFS export mounted. You can define a tag in each datastore template:

$ onedatastore show ssd_system
 ...
 SPEED = 10
$ onedatastore show nfs_system
 ...
 SPEED = 5
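
SPEED is just a custom attribute chosen for this example, not a predefined one. You could set it by editing the datastore template, along these lines:

$ onedatastore update ssd_system
SPEED = 10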

Those tags can be used in the VM template to request a specific system datastore, or to define the deployment preference:

# This VM will be deployed preferably in the SSD datastore, but will fall back to the NFS one if the former is full
$ onetemplate show 2
...
SCHED_DS_RANK = "SPEED"

# This other VM must be deployed only in the ssd system datastore
$ onetemplate show 1
...
SCHED_DS_REQUIREMENTS = "NAME = ssd_system"

What about the load balancing mentioned in the title? Instead of different storage backends, you may want to install several similar system datastores and distribute your VMs across them. This is configured in the sched.conf file, using the ‘striping’ policy.
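
For reference, the datastore policy is set in /etc/one/sched.conf; a sketch of the relevant snippet (check the comments in your installed file for the exact policy numbers):

DEFAULT_DS_SCHED = [
   policy = 1     # striping: spread new VMs across the system datastores
]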


We hope you find these improvements useful. Let us know what you think in the mailing lists!

And We Are Back From EGI TF 2013

What an interesting week we’ve had at EGI TF and the Cloud Interoperability Week! We had the opportunity to meet old friends, shake hands with users that we only knew by email, and also had the chance to thank some of our community contributors personally.

Most of the people we spoke with were already OpenNebula users, so we had a great time hearing about their use cases and customizations, and gathering feature requests.

The presentation was followed by an interesting question and answer session, where different cloud technologies were represented. You can get the presentation slides from our SlideShare account.

We were a bit concerned about the small time slot assigned to the hands-on tutorial on Thursday, but things went smoothly and practically all attendees managed to get their own OpenNebula cluster with 2 nodes and a couple of VMs. They even had time to configure the rOCCI server and play a bit with it.

See you next time!

OpenNebula at the EGI Technical Forum & Cloud Interoperability Week

Next week we will be busy at the EGI Technical Forum & Cloud Interoperability Week events in Madrid, Spain.

If you are attending, make sure you don’t miss these presentations:

Oh, and we will be around at our booth (#16), so come and see us to talk about HPC and Cloud Computing, and to play with our live OpenNebula demo.

See you there!

A Preview of the New Multiple Group Functionality in OpenNebula 4.4

OpenNebula has a very flexible approach to user and resource management. You can organize your users in groups or VDCs, allow them to manage their own resources with permissions, limit how many resources users or groups can consume, or implement intricate use cases with ACL rules.

For OpenNebula 4.4 we will also introduce secondary groups. These will work in a similar way to Unix groups: users will have a primary group and, optionally, several secondary groups. This new feature is completely integrated with the mechanisms mentioned above, allowing you, for example, to do the following:

  • The list of images visible to a user contains all the images shared within any of their groups.
  • You can deploy a VM using an Image from one of your groups, and a second Image from another group.
  • New resources are created in the owner’s primary group, but users can later change that resource’s group.
  • Users can change their primary group to any of their secondary ones.
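
From the CLI, managing secondary groups should look something like this (the user and group names are made up, and the exact subcommands may still change before the release):

$ oneuser addgroup helen testers    # add 'testers' as a secondary group
$ oneuser chgrp helen testers       # make 'testers' the new primary group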

And, as always, secondary groups can be easily managed through our Sunstone web interface.

Stay tuned for the beta release; we’ll be happy to get your feedback!

Screencasts – OpenNebula 4.2 Features

Here are three short screencasts that give an overview of some of the new features introduced in OpenNebula 4.2 from the user’s perspective. Enjoy!

Sunstone Self-Service Cloud View

The Cloud view is one of the configurable Sunstone views.

This simplified view is intended for cloud consumers who just require a portal where they can easily provision new virtual machines. They simply select one of the available templates and the operating system that will run in the virtual machine.

OneFlow Service Deployment

Thanks to the OneFlow component introduced in OpenNebula 4.2, related virtual machines can be grouped into a Service. A Service can define deployment dependencies, as shown in this screencast.
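
As a rough sketch (the role names and VM template IDs are made up), a service template with a deployment dependency could look like this:

{
  "name": "web-service",
  "deployment": "straight",
  "roles": [
    { "name": "database", "cardinality": 1, "vm_template": 0 },
    { "name": "frontend", "cardinality": 2, "vm_template": 1,
      "parents": [ "database" ] }
  ]
}

With the straight deployment strategy, the frontend VMs are not started until the database role is up and running.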

OneFlow Service Auto-Scaling

This screencast shows the OpenNebula Auto-Scaling features. A Service (group of interconnected VMs) can adjust the number of VMs based on performance metrics, or based on a schedule.
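
For reference, the scaling rules are defined per role. A hedged fragment of what such policies could look like (CPU_LOAD is a made-up attribute that the VM guests would report back):

"elasticity_policies": [
  { "type": "CHANGE", "adjust": 2,
    "expression": "CPU_LOAD > 80",
    "period": 60, "period_number": 3 }
],
"scheduled_policies": [
  { "type": "CARDINALITY", "adjust": 4,
    "recurrence": "0 9 * * mon-fri" }
]

The first policy adds two VMs when the expression holds for three consecutive 60-second periods; the second sets the role to four VMs every weekday morning.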

A Preview of the New Scheduling Features in OpenNebula 4.0

We are working hard developing new features and overall improvements for OpenNebula 4.0. Before the new Sunstone eye-candy is revealed and steals all the attention, we wanted to share some of the other features: those small pieces that won’t wow anyone but, combined, make OpenNebula the best option when it comes to integration capabilities.

In this post I will focus on the Scheduler related features.

Schedule any action, not only deployments

We have added a '--schedule' option to most of the VM life-cycle commands. Users can schedule one or more VM actions to be executed at a certain date and time, for example:

$ onevm shutdown 0 --schedule "05/25 17:45"
VM 0: shutdown scheduled at 2013-05-25 17:45:00 +0200

$ onevm cancel 0 --schedule "05/25 18:00"
VM 0: cancel scheduled at 2013-05-25 18:00:00 +0200

$ onevm show 0
SCHEDULED ACTIONS
ID ACTION        SCHEDULED         DONE MESSAGE
 0 shutdown    05/25 17:45            -
 1 cancel      05/25 18:00            -

These actions can be edited or deleted by updating the VM template (another new feature that you will see in 4.0).

$ onevm update 0

SCHED_ACTION=[
  ACTION="shutdown",
  ID="0",
  TIME="1369496700" ]
SCHED_ACTION=[
  ACTION="cancel",
  ID="1",
  TIME="1369497600" ]

Placement expression improvements

We have also improved the requirements and rank expressions, making them more useful. Until now you could choose to deploy VMs in a certain cluster using the following expression:

SCHED_REQUIREMENTS = "CLUSTER = production"

But now you can also use variables from the Host’s Cluster template. Let’s say you have the following scenario:

$ onehost list
  ID NAME            CLUSTER   RVM      ALLOCATED_CPU      ALLOCATED_MEM STAT
   1 host01          cluster_a   0       0 / 200 (0%)     0K / 3.6G (0%) on
   2 host02          cluster_a   0       0 / 200 (0%)     0K / 3.6G (0%) on
   3 host03          cluster_b   0       0 / 200 (0%)     0K / 3.6G (0%) on    

$ onecluster show cluster_a
CLUSTER TEMPLATE
QOS="GOLD"

$ onecluster show cluster_b
CLUSTER TEMPLATE
QOS="SILVER"

You can now use these expressions:

SCHED_REQUIREMENTS = "QOS = GOLD"

SCHED_REQUIREMENTS = "QOS != GOLD & HYPERVISOR = kvm"

If you use the Ruby, Java or XML-RPC APIs, you will be familiar with the XML representation of the resources. The variable names used in rank and requirements expressions can be child elements of HOST, HOST/HOST_SHARE or HOST/TEMPLATE. From now on, you can also use XPath to have more control over the element used:

SCHED_RANK="/HOST/HOST_SHARE/FREE_CPU"
SCHED_REQUIREMENTS="/HOST/TEMPLATE/CUSTOM_VAR != 5"

Scheduler decisions explained

By default, regular users can’t see the Hosts or open the Scheduler log file, and when their VMs can’t be deployed it may look like something is broken. In the next version, the scheduler will add a message to the VM templates to let users know if there are any problems:

$ onevm show 4
USER TEMPLATE
SCHED_MESSAGE="Wed Feb 20 11:43:55 2013 : No hosts enabled to run VMs"

$ onevm show 5
USER TEMPLATE
SCHED_MESSAGE="Wed Feb 20 11:52:11 2013 : No host meets the SCHED_REQUIREMENTS expression"
SCHED_REQUIREMENTS="FREE_CPU > 100 & FREE_CPU < 50"

Basic affinity

There is a new special variable that provides basic VM affinity functionality. Let’s see a sample use case: we want to create two identical VMs, and they should run on the same Host. In OpenNebula 4.0 you can create VMs directly on hold, so the scheduler won’t deploy them until you decide to release them:

$ onetemplate instantiate 0 --multiple 2 --hold
VM ID: 2
VM ID: 3

Now that we have the IDs, we can update one of the VM’s scheduler requirements.

$ onevm update 3
SCHED_REQUIREMENTS="CURRENT_VMS = 2"

$ onevm release 2,3

The scheduler will deploy VM 2 first, and in the following cycle deploy VM 3 on the same Host. This, of course, has some limitations, because the scheduler treats each VM individually instead of as a group. If you need more advanced VM group management, give the AppFlow component a try.

As always, we welcome any feedback. In fact, most of these changes are requests made by the community. I invite you to join our mailing lists and our development portal if you haven’t yet.

OpenNebula at LinuxCon Europe 2012

Last Monday we were at LinuxCon Europe, held in the beautiful city of Barcelona. Rubén gave a talk about building IaaS clouds and the art of virtual machine management, while he created a small OpenNebula private cloud on his laptop from scratch. He even had time to import a couple of images from the Marketplace and boot them.

We would like to thank the Linux Foundation for the invitation to participate in this event, giving us the opportunity to chat with some of our users. We also had the chance to talk about the future of the open cloud with our beautiful open source sisters. What sisters, you may ask? Well, this is how Mårten Mickos referred in his keynote to the four main open source cloud technologies (Eucalyptus, CloudStack, OpenStack, and OpenNebula) in reply to VMware, and I hope the term catches on.

Here are the slides from the talk: