Here’s our monthly newsletter with the main news from the last month, including what you can expect in the coming months.

Technology

This month’s biggest milestone was the release of the latest stable version of OpenNebula, 3.6 Lagoon. This release is focused on stabilizing the features introduced in OpenNebula 3.4, improving the performance of some existing features, and adding new features for virtualization management and integration with the new OpenNebula Marketplace. The release received substantial coverage from the specialized press, including GigaOM, HPCintheCloud and the Linux Foundation.

Also, the Pro version of OpenNebula 3.6 was released shortly afterwards by C12G Labs, certified through testing processes that delivered several bug fixes on top of the community version. OpenNebulaPro is provided under an open-source license to customers and partners on an annual subscription basis through the OpenNebula.pro Support Portal. The subscription model brings several additional benefits: long-term multi-year support, integration and production support with professional SLAs, regular maintenance releases, product influence, and privacy and security guarantees, all at a competitive cost without high upfront fees.

Additionally, a set of packages was developed to aid in the contextualization of guest images by OpenNebula, smoothing the process of preparing images for use in an OpenNebula cloud. Installing these packages on a guest Linux instance (there are packages for the main Linux distributions) ensures that the guest is prepared to use the information passed through the OpenNebula contextualization mechanism.

The OpenNebula Marketplace is growing at a good pace, with newly added images like the Virtual Router, which can automatically configure a wide variety of networking services. Moreover, a screencast was made available showing how easy it is to import an appliance from the Marketplace into your local infrastructure using Sunstone.

And last, but not least, a set of administration scripts for OpenNebula, based on the command-line interface, was published to aid cloud admins in common daily tasks.

Community

A very busy month in the OpenNebula community, with the highlight being China Mobile’s post describing their Big Cloud cloud computing software stack, and how and why they chose OpenNebula as its core component for distributed VM management. We are thrilled about our upcoming collaboration with China Mobile to accommodate their requirements, and they have declared their willingness to contribute back in order to improve OpenNebula.

Another good piece of community contribution came from Shankhadeep Shome in his blog post about a NUMA-aware VM balancer for OpenNebula using cgroup scheduling. This work comes from his experience implementing a virtual Hadoop cluster using OpenNebula, which is exactly the sort of feedback we love to hear about.

A great contribution was added by AGH University of Science and Technology: a new set of drivers for OpenVZ, one of the providers of container-based virtualization for Linux. This broadens the already wide selection of hypervisors supported by OpenNebula ecosystem components.

And we also have a new ecosystem component: the Contrail Virtual Execution Platform. Moreover, cloud.b.labs continues its series of rather good tutorials, this one explaining how to create Windows Server 2008 VMs on clustered node machines.

Outreach

During this summer, members of the OpenNebula team will be participating in several cloud events:

  • CloudOpen 2012, Linux Foundation, San Diego, USA, August 29-31, 2012
  • FrOSCon 7, Bonn, Germany, August 25-26, 2012

If you will be attending any of these events and want to meet with a member of the OpenNebula team, drop us a line at contact@opennebula.org.

Remember that you can see slides and resources from past events in our Events page. We have also created a Slideshare account where you can see the slides from some of our recent presentations.

C12G Labs has just announced an update release of OpenNebulaPro, the enterprise edition of OpenNebula. OpenNebulaPro integrates the most recent stable version of OpenNebula (3.6) with the bug, performance, and scalability patches developed by the community and by C12G for its customers and partners.

OpenNebula 3.6 (codename Lagoon), released one month ago, featured several enhancements focused on making it a more robust and user-friendly cloud manager. The main new feature is the new hotplugging mechanism for disk volumes, which supports attaching either volatile volumes or existing images to a running VM. The Quota and Accounting tools were also rewritten from scratch, so they are now included in the OpenNebula core to enhance their integration with the existing AuthZ & AuthN mechanisms and other related tools (e.g. Sunstone). New features such as VM rescheduling, hard reboots, and cloning of disk images were added as well. Apart from the core, Lagoon came with improvements in other systems, especially in Sunstone’s interface, with the redesign of several tabs, as well as in the OpenNebula Zones. An important milestone was also reached in this release: OpenNebula 3.6 is fully integrated with the new OpenNebula Marketplace. Any user of an OpenNebula cloud can very easily find and deploy virtual appliances through familiar tools like the Sunstone GUI or the OpenNebula CLI. The OpenNebula Marketplace is also of interest to any software developer who wants to quickly distribute a new appliance, making it available to all OpenNebula deployments worldwide.

OpenNebulaPro is used by corporations, research centers and governments looking for a hardened, certified, long-term supported cloud platform. OpenNebulaPro combines the rapid innovation of open-source with the stability and long-term production support of commercial software. Compared to OpenNebula, the expert production and integration support of OpenNebulaPro and its higher stability increase IT productivity, speed time to deployment, and reduce business and technical risks. Compared to other commercial alternatives, OpenNebulaPro is an adaptable and interoperable cloud management solution that delivers enterprise-class functionality, stability and scalability at significantly lower costs.

OpenNebulaPro is provided under an open-source license to customers and partners on an annual subscription basis through the OpenNebula.pro Support Portal. The subscription model brings several additional benefits: long-term multi-year support, integration and production support with professional SLAs, regular maintenance releases, product influence, and privacy and security guarantees, all at a competitive cost without high upfront fees. C12G Labs also offers professional services to help at any stage of cloud computing adoption with OpenNebula.

C12G offers a 30-day, no-cost, no-commitment trial of OpenNebulaPro, including services to assess its suitability and performance in your environment.

Big Cloud is the cloud computing software stack developed by China Mobile Research Institute to support China Mobile’s operation platform and provide services to its more than 600 million customers.

BC-EC, Big Cloud Elastic Computing, chose OpenNebula as its core component to manage and schedule the virtualization infrastructure in 2008. Since then, we have been glad to see OpenNebula become more full-fledged each day. BC-EC, which matured along with OpenNebula, has been used in China Mobile’s internal business and is ready to provide public services.

BC-EC consists of three parts: a web portal providing the self-service entry point; a front-end management service providing service and operation management, user management, billing, and so on; and a service database. The front-end can provide functions similar to the OpenNebula Zones (oZones) component, managing several OpenNebula back-ends.

Recently, China Mobile has been preparing to launch a public cloud service based on the BC-EC solution. This cloud includes 1,000 servers: 700 of them will provide virtual machine computing services based on BC-EC, and the other 300 will provide cloud storage based on Big Cloud’s object store system, named Onest.

We hope to cooperate closely with the OpenNebula community to improve OpenNebula with our requirements and experience, contributing bug fixes and developing new features.

We know that creating new Virtual Appliances can sometimes be cumbersome. To help you create them, a new set of packages has been developed so that preparing these images to work with OpenNebula is a breeze. They are compatible with:

  • Ubuntu >= 11.x
  • Debian Squeeze
  • CentOS 6.x
  • RHEL 6.x

These packages will prepare udev rules so you won’t have problems after the first start, and will also add the contextualization scripts that configure the network and any other subsystem or software using the contextualization CD-ROM.
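
To give an idea of what happens at boot, here is a minimal sketch of such a contextualization hook (the mount point and variable names are illustrative assumptions; the real packages are more thorough):

#!/bin/bash
# Minimal sketch of a boot-time contextualization hook (illustrative only).
# OpenNebula attaches a CD-ROM image containing a context.sh file with the
# key=value pairs defined in the VM template's CONTEXT section.

MOUNT_DIR=/mnt/context
mkdir -p "$MOUNT_DIR"
mount -t iso9660 -o ro /dev/cdrom "$MOUNT_DIR"

if [ -f "$MOUNT_DIR/context.sh" ]; then
    . "$MOUNT_DIR/context.sh"            # load the CONTEXT variables

    # Apply whatever the template provided, e.g. a hostname or an SSH key
    [ -n "$HOSTNAME" ] && hostname "$HOSTNAME"
    [ -n "$SSH_PUBLIC_KEY" ] && echo "$SSH_PUBLIC_KEY" >> /root/.ssh/authorized_keys
fi

umount "$MOUNT_DIR"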

More information is available in the Contextualization Packages for VM Images guide.

Using the power of OpenNebula’s contextualization, we have built an appliance that, when deployed in OpenNebula, will automatically configure a wide variety of networking services, namely:

  • DHCP server
  • NTP server
  • Public network masquerading
  • Port forwarding
  • DNS server

Using this virtual router is very easy: import the appliance into your local infrastructure, create a new template using this image, and define the CONTEXT following the documentation.
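
As a rough sketch (the CONTEXT attribute names below are illustrative assumptions; the appliance’s documentation defines the real variables), registering and launching such a template from the CLI could look like this:

# Hypothetical session: register a template for the virtual router appliance.
cat > vrouter.tmpl <<'EOF'
NAME    = "virtual-router"
CPU     = 1
MEMORY  = 512
DISK    = [ IMAGE = "Virtual Router" ]   # the image imported from the Marketplace
NIC     = [ NETWORK = "public" ]
NIC     = [ NETWORK = "private" ]
CONTEXT = [
  DHCP = "YES",                          # illustrative variable names
  DNS  = "YES"
]
EOF
onetemplate create vrouter.tmpl
onetemplate instantiate virtual-router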

Download

A new episode of the screencast series is now available at the OpenNebula YouTube Channel.

This screencast shows how easy it is to import an appliance from the OpenNebula Marketplace into your local infrastructure using Sunstone.

Enjoy the screencast!

One of the strong points of OpenNebula is its UNIXy feel. It’s easy to create scripts that wrap around OpenNebula CLI commands, using die-hard UNIX tools such as AWK, sed, etc., to do specific tasks. I would like to share a collection of scripts I’ve been using almost on a daily basis for quite a while now, which improve the user experience in the shell.
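
As a flavor of what such wrappers look like, here is a hypothetical snippet in the same spirit (not one of the published scripts; the column positions in the onevm list output may vary between versions):

#!/bin/bash
# Shut down every VM of the current user that is in the "runn" state.
onevm list | awk '$5 == "runn" {print $1}' | while read -r id; do
    onevm shutdown "$id"
done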

one-tools

Contributions, ideas and improvements are more than welcome. I’m looking forward to seeing (and including) your OpenNebula tricks and hacks in the shell!

I’ve recently had the joy of building out a Hadoop cluster (Cloudera Distribution with Cloudera Manager) for internal development at work. The process was quite tedious, as a large number of machines had to be configured by hand. This seemed like a good application to move to a cloud infrastructure in order to improve cluster deployment times and to provide an on-demand resource for this workload (see Cloudera Hadoop Distribution and Manager). OpenNebula was the perfect choice for the compute-oriented cloud architecture because resources can be allocated to and removed from the Hadoop environment on demand. As Hadoop is a fairly memory-intensive workload, we wanted to improve memory throughput on the VMs, and group scheduling showed some promise for improving VM CPU and memory placement.

About our Infrastructure

We are working with Dell C6145s, which are 8-way NUMA systems based on quad-socket, 12-core AMD Magny-Cours processors; note that each socket has 2 NUMA nodes. An interesting thing about these systems is that even though they are quad socket, they have 8 NUMA domains! We wanted to see if group scheduling could be used to improve performance on these boxes by compartmentalizing VMs so that memory accesses between NUMA domains are minimized and L2/L3 cache hits improve.

The Linux NUMA-aware scheduler already does a great job; however, we wanted to see if there was a quick and easy way to allocate resources on these NUMA machines to reduce non-local memory access, improve memory throughput, and in turn speed up memory-sensitive workloads like Hadoop. A cpuset is a combination of memory and CPU configured as a single scheduling domain. Libvirt, the control API used to manage KVM, has some capabilities to map vCPUs to real CPUs and even configure them in a virtual NUMA configuration mimicking the host it’s running on; however, we found this very cumbersome to use because each VM has to be hand-tuned to get any advantage. It also defeats the OpenNebula paradigm of rapid template-based provisioning.
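
For reference, this is the kind of per-VM hand-tuning we wanted to avoid (an illustrative virsh session; the domain name is hypothetical):

# Pin each vCPU of a single domain to a specific host CPU, by hand, VM by VM.
virsh vcpupin one-42 0 6    # vCPU 0 -> host CPU 6
virsh vcpupin one-42 1 7    # vCPU 1 -> host CPU 7
virsh vcpupin one-42 2 8    # vCPU 2 -> host CPU 8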

Implementation

Alex Tsariounov wrote a very user-friendly program called cpuset (the cset command) that does all the heavy lifting of moving processes from one CPU set to another. The source is available from its Google Code repository or from the Ubuntu 12.04+ repositories.

http://code.google.com/p/cpuset/

I wrote a Python wrapper script building on cpuset, which adds the following features (a rough shell sketch of the first steps follows the list):

  • Creates CPU sets based on the numactl --hardware output
  • Maps CPUs and their memory domains into their respective CPU sets
  • Places KVM virtual machines built using libvirt into cpusets using a balancing policy
  • Rebalances VMs based on a balancing policy
  • Runs once, then exits, so that system admins can control when and how much balancing to do
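
In shell terms, the first steps amount to something like the following sketch (illustrative commands only; the actual script is written in Python and parses the numactl output more robustly):

# Create one cpuset per NUMA node, pairing its CPUs with its memory domain.
for node in $(seq 0 7); do
    cpus=$(numactl --hardware | awk -v n="$node" \
        '$1 == "node" && $2 == n && $3 == "cpus:" \
         {out = $4; for (i = 5; i <= NF; i++) out = out "," $i; print out}')
    cset set --cpu="$cpus" --mem="$node" --set="/VMBS$node"
done
# Each KVM process (and its threads) would then be moved into the
# least-loaded set, e.g.:
#   cset proc --move --pid=<pid> --threads --toset=/VMBS<n>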

Implementation – Example

Cpuset status without a NUMA configuration; this is the state of most systems without group scheduling configured. The CPUs column shows the CPUs in that particular scheduling domain, and likewise for the memory domains. In this system there are 48 cores (0-47) and 8 NUMA nodes (0-7).

root@clyde:~# cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root       0-47 y     0-7 y   490    1 /
      libvirt      ***** n   ***** n     0    1 /libvirt

Cpuset status after a vm-balancer.py run with no VMs running. Notice how the CPUs and memory domains have been paired up.

root@clyde:~# cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root       0-47 y     0-7 y   489    9 /
      libvirt      ***** n   ***** n     0    1 /libvirt
        VMBS7      42-47 n       7 n     0    0 /VMBS7
        VMBS6      36-41 n       6 n     0    0 /VMBS6
        VMBS5      30-35 n       5 n     0    0 /VMBS5
        VMBS4      24-29 n       4 n     0    0 /VMBS4
        VMBS3      18-23 n       3 n     0    0 /VMBS3
        VMBS2      12-17 n       2 n     0    0 /VMBS2
        VMBS1       6-11 n       1 n     0    0 /VMBS1
        VMBS0        0-5 n       0 n     0    0 /VMBS0

VM balancer and cset in action, moving 8 newly created KVM processes and their threads (vCPUs and iothreads) into their NUMA cpusets:

root@clyde:~# ./vm-balancer.py
Found cset at /usr/bin/cset
Found numactl at /usr/bin/numactl
Found virsh at /usr/bin/virsh
cset: --> created cpuset "VMBS0"
cset: --> created cpuset "VMBS1"
cset: --> created cpuset "VMBS2"
cset: --> created cpuset "VMBS3"
cset: --> created cpuset "VMBS4"
cset: --> created cpuset "VMBS5"
cset: --> created cpuset "VMBS6"
cset: --> created cpuset "VMBS7"
cset: moving following pidspec: 47737,47763,47762,47765,49299
cset: moving 5 userspace tasks to /VMBS0
[==================================================]%
cset: done
cset: moving following pidspec: 46200,46203,46204,46207
cset: moving 4 userspace tasks to /VMBS1
[==================================================]%
cset: done
cset: moving following pidspec: 45213,45210,45215,45214
cset: moving 4 userspace tasks to /VMBS2
[==================================================]%
cset: done
cset: moving following pidspec: 45709,45710,45711,45705
cset: moving 4 userspace tasks to /VMBS3
[==================================================]%
cset: done
cset: moving following pidspec: 46719,46718,46717,46714
cset: moving 4 userspace tasks to /VMBS4
[==================================================]%
cset: done
cset: moving following pidspec: 47306,47262,49078,47246,47278
cset: moving 5 userspace tasks to /VMBS5
[==================================================]%
cset: done
cset: moving following pidspec: 48247,48258,48252,48274
cset: moving 4 userspace tasks to /VMBS6
[==================================================]%
cset: done
cset: moving following pidspec: 48743,48748,48749,48746
cset: moving 4 userspace tasks to /VMBS7
[==================================================]%
cset: done

After the VMs are balanced into their respective NUMA domains, note that there are 3 vCPUs per VM plus 1 parent process; the VM that shows 5 tasks is actually running a short-lived iothread.

root@clyde:~# cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root       0-47 y     0-7 y   500    9 /
        VMBS7      42-47 n       7 n     5    0 /VMBS7
        VMBS6      36-41 n       6 n     4    0 /VMBS6
        VMBS5      30-35 n       5 n     4    0 /VMBS5
        VMBS4      24-29 n       4 n     4    0 /VMBS4
        VMBS3      18-23 n       3 n     4    0 /VMBS3
        VMBS2      12-17 n       2 n     4    0 /VMBS2
        VMBS1       6-11 n       1 n     4    0 /VMBS1
        VMBS0        0-5 n       0 n     4    0 /VMBS0
      libvirt      ***** n   ***** n     0    1 /libvirt

The Python script can be downloaded from http://code.google.com/p/vm-balancer-numa/downloads/list for now.

Note: this was inspired by work presented at KVM Forum 2011 by Andrew Theurer; we even used the same benchmark to test our configuration.
http://www.linux-kvm.org/wiki/images/5/53/2011-forum-Improving-out-of-box-performance-v1.4.pdf

In the next part I will explain how the vm-balancer.py script works and its limitations.

The OpenNebula project is proud to announce the availability of OpenNebula 3.6 (Lagoon). This release is focused on stabilizing the features introduced in OpenNebula 3.4, improving the performance of some existing features, and adding new features for virtualization management and integration with the new OpenNebula Marketplace.

OpenNebula 3.6 features a new hotplug mechanism for disk volumes that supports attaching either volatile volumes or existing images to a running VM. For OpenNebula 3.6 we have also rewritten the Quota and Accounting tools from scratch; they are now included in the OpenNebula core to enhance their integration with the existing AuthZ & AuthN mechanisms and other related tools (e.g. Sunstone). There are some other new features like VM rescheduling, hard reboots, cloning of disk images, support for per-cluster definition of system datastores…
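
For example (an illustrative session; the exact sub-command flags may vary between releases), hotplugging a disk from the CLI looks roughly like this:

onevm attachdisk 42 --image datablock01   # attach an existing image to running VM 42
onevm detachdisk 42 3                     # detach it again, by disk ID inside the VM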

OpenNebula 3.6 also features improvements in other systems, especially in Sunstone’s interface, with the redesign of several tabs, as well as in the OpenNebula Zones, where we got rid of the datamapper dependency to ease the packaging of OpenNebula.

Last but not least, OpenNebula 3.6 is fully integrated with the new OpenNebula Marketplace. Any user of an OpenNebula cloud can very easily find and deploy virtual appliances through familiar tools like the Sunstone GUI or the OpenNebula CLI. The OpenNebula Marketplace is also of interest to any software developer who wants to quickly distribute a new appliance, making it available to all OpenNebula deployments worldwide.

This is a final release aimed at production environments; upgrading is recommended for any infrastructure running a previous version.

As usual OpenNebula releases are named after a Nebula. The Lagoon Nebula (also known as M8, or NGC 6523) is a giant interstellar cloud in the constellation Sagittarius.

Relevant Links

The one-ovz-driver project bridges the gap between OpenVZ and OpenNebula by enabling the management of OpenVZ containers.

OpenVZ is one of the providers of container-based virtualization for Linux. Lightweight virtualization takes a different approach than hypervisors such as Xen or VMware, offering a better solution in some scenarios. More information can be found on the OpenVZ project page: http://wiki.openvz.org/Main_Page.
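
For readers new to OpenVZ, containers are managed on the host with the vzctl tool; for instance (an illustrative session, with a hypothetical container ID and OS template):

vzctl create 101 --ostemplate centos-6-x86_64   # create container 101 from an OS template
vzctl start 101
vzctl exec 101 uname -a                         # run a command inside the container
vzctl stop 101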

The OpenVZ drivers for OpenNebula consist of two main parts:

  1. Virtualization Manager Driver
    • deploy (including contextualization; the deployment file’s <RAW> nodes are passed to vzctl during container creation; see the sketch after this list)
    • cancel, reboot, shutdown
    • live migration
    • poll
    • restore & save
  2. Information Manager Driver
    • besides the basic probes, there are OpenVZ-specific ones such as CURRENT_CPU_UTIL, NODE_CPU_POWER or MEMORY_ALLOC
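
As an illustration of that <RAW> mechanism (a hypothetical fragment; the attribute values are illustrative assumptions rather than the driver’s documented defaults), a VM template could pass extra vzctl options like this:

# Hypothetical template fragment: DATA is handed to vzctl at container creation.
RAW = [
  TYPE = "ovz",
  DATA = "--diskspace 10G:11G --onboot yes"
]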

Technically, the drivers are entirely written in Ruby, built atop the ruby-openvz gem: https://github.com/sts/ruby-openvz/.

The project is hosted on github: https://github.com/dchrzascik/one-ovz-driver and licensed under Apache License 2.0.

Further details on the drivers’ installation and configuration can be found on the project wiki pages.

Authors:
Dariusz Chrząścik, Marta Ryłko, Radosław Morytko, Marcin Jarząb from AGH University of Science and Technology