We are very happy to announce that the OpenNebula add-ons will be released under the Apache license and incorporated into the main distribution of OpenNebula. The LDAP authentication, the accounting toolset and the VMware support will be included in subsequent OpenNebula releases without needing to download any additional component. OpenNebula 3.2 support for VMware will also include the following new features that have been developed by C12G Labs for its customers and partners:
  • Support for VMware’s vMotion to allow live migration of VMs between VMware hosts, enabling load balancing between cloud worker nodes without downtime in the migrated VM.
  • Support for contextualization to provide a method to pass arbitrary data to a VM, enabling the configuration of the services at boot time.
  • Support for non-cloned, non-persistent disks, enabling the configuration of multiple Windows VMs using the same base non-persistent disk.
  • Support for SSH disk transfers, a replacement for the out-of-the-box shared filesystem Transfer Manager drivers, allowing VM disks to be copied using the SSH protocol instead of relying on a shared datastore between the OpenNebula front-end and the VMware hypervisor hosts.

C12G Labs is pleased to announce that a new stable version of the OpenNebula VMware Addon has been contributed to the OpenNebula Project. The major objective of this release is to provide compatibility, and much better integration, with the new OpenNebula 3.0, and to include several bug fixes.

C12G Labs would like to point out that the OpenNebula Project fully endorses these extensions and supports them through the user mailing list. Moreover, the project ensures their full compatibility with current and upcoming releases of OpenNebula.

C12G Labs announced today a major new release of OpenNebulaPro, the enterprise edition of the OpenNebula Toolkit. The third generation of OpenNebula, released two months ago, is helping many organizations make the transition toward the next generation of cloud infrastructures by supporting multiple fully-isolated virtual data centers, advanced multi-tenancy with fine-grained access control, and multiple zones potentially hosted in different geographical locations. This new release has also brought important benefits to cloud users and administrators with a greatly improved Sunstone GUI that provides easy access to all the new features in 3.0 and a new oZones GUI to manage zones and virtual data centers. Other features included in this release are new authentication methods with usage quotas, a VM template repository, a new monitoring and accounting service, and a new network subsystem with support for Open vSwitch and 802.1Q tagging. OpenNebula allows data centers to provide cloud services by leveraging their existing IT assets, instead of building a new system from the ground up, thus protecting existing investments and avoiding vendor lock-in.

OpenNebulaPro 3.0 integrates the most recent stable version of OpenNebula 3.0 with the bug, performance, and scalability patches developed by the community and by C12G for its customers and partners. Compared to OpenNebula, the expert production and integration support of OpenNebulaPro and its higher stability increase IT productivity, speed time to deployment, and reduce business and technical risks. Supported Linux distributions are Red Hat Enterprise Linux, SUSE Linux Enterprise, CentOS, Ubuntu, Debian and openSUSE; supported hypervisors are VMware ESX/ESXi, KVM and Xen; and the supported cloud provider for cloudbursting is Amazon Web Services.

OpenNebulaPro 3.0 is provided under an open-source license to customers and partners on an annual subscription basis through the OpenNebula.pro Support Portal. The subscription model brings several additional benefits: long-term multi-year support, production-level support with professional SLAs, integration support for optimal and scalable execution in any hardware and software combination, certification support to validate compatibility with complementary components and customizations, guaranteed patches, regular updates and upgrades, product influence, and privacy and security guarantees, all at a competitive cost without high upfront fees.

November 18th, 2011. Today we announce the general availability of the first pre-release of OpenNebula 3.2. With this release we debut a new development cycle that aims to deliver new features to the community more rapidly and to react faster to its needs and feedback.

The pre-release series is not suitable for production environments, as you may find some rough edges. However, the packages have gone through the standard testing procedure used for final releases, so you should consider them stable. The first pre-release of OpenNebula 3.2 includes important new features and improvements in the security area, and in the management of networks, users and VM images.

With this release we also wanted to celebrate our 4th birthday. Happy testing everybody!

Back in November 2007 (four years ago!) we published the first OpenNebula project website (see what it looked like back then, thanks to the Internet Archive), as we geared up for our first release of code (which did not take place until March 2008). The OpenNebula project was created as a way to transfer the main results of our cutting-edge research on efficient management of virtualization in large-scale distributed infrastructures and, since our first software release, OpenNebula has evolved into an active open-source project with a community that, by many measures, is more than doubling each year:

  • Website Access. From 35,842 visits and 285,965 page views in 2008 to 579,571 visits and 6,992,300 page views in 2011, which means a 150% and 190% average annual growth respectively. During the last week we had 15,300 visits, 194,000 page views, and 570,000 hits.
  • Mailing List. From 227 messages in 2008 to 4,341 in 2011, which means a 170% average annual growth. At present we have more than 800 registered users.
  • Downloads from Project Site. From 1,865 downloads in 2008 to 25,200 in 2011, which means a 140% average annual growth. In the last week, we had 900 downloads. These numbers do not include the OpenNebula packages distributed in openSUSE, Ubuntu or Debian, the downloads from our code repository, or the several cloud solutions embedding OpenNebula.
  • Codebase History. From 30,000 lines of code in 2008 to almost 300,000 in 2011. Another interesting fact about the source code is how OpenNebula effectively uses several programming languages and technologies: each language has its relative strengths and provides unique features to meet the needs of the different components in the architecture. Ohloh provides a very nice interface to see inside OpenNebula development and to compare it with other open-source projects.

These stats highlight the success of our strategy to deliver a fully open-source, Apache-licensed cutting-edge technology with the stability, the integration capabilities, and the latest innovations in Data Center virtualization to enable the most demanding cloud environments. OpenNebula features address real needs of leading IT organizations that depend on OpenNebula for their production environments. The requirements of our users are the driving force behind all our development efforts and we recently announced a new release cycle to improve user satisfaction by rapidly delivering changes based on user requirements and feedback. In other words, giving users what they want more quickly, in smaller increments, while additionally increasing technical quality.

Congratulations everyone for this 4th birthday of OpenNebula!

Following the last release of OpenNebula 3.0, the OpenNebula project is moving to a rapid release development cycle. Our goal is to deliver new features and innovations to the community faster, and to better incorporate the requirements of our users and the feedback from the community.

With this change, OpenNebula releases will react faster to the needs of IT organizations running production environments. We also expect that delivering smaller functionality deltas will ease the transition to new releases and simplify the maintenance of production deployments.

The OpenNebula release cycle is now structured as follows:

  • OpenNebula releases will occur every three months. Prior to the official release date there will be a beta (two weeks before) and a release candidate (a week before). These two releases are feature-frozen and mainly devoted to bug fixing and polishing. After each release, OpenNebula publishes the blueprints for the next release to get feedback from the community.
  • The features for each release are prioritized and developed in three one-month sprints. At the end of each sprint an OpenNebula pre-release will be available that incorporates the features added and the bugs fixed during that sprint. The OpenNebula pre-releases go through the same testing and certification process as the official releases, i.e. you should expect the same level of stability.

The release plan for OpenNebula 3.2 is:

  • OpenNebula 3.2 will have a pre-release available by November 18th. This release incorporates the features developed during the first two sprints.
  • OpenNebula 3.2 Final will be released on December 20th. The blueprints for this release can be found at the development portal.

We’ll debut this new process with an OpenNebula 3.2 pre-release this Friday; stay tuned for release notes and download instructions.

Here’s our monthly newsletter with the main news from the last month, including what you can expect in the coming month.

Technology

After the release of OpenNebula 3.0, we’ve continued to highlight some of its features. In particular, many features are the result of feedback from the High Performance Computing community. In the blog post Building a Cloud for High Performance Computing we highlighted many of the requirements from this community, and how OpenNebula meets those requirements.

This month we also released development versions of two new extensions for OpenNebula: Microsoft Hyper-V drivers and Xen Cloud Platform (XCP) drivers, as a result of our collaboration with Microsoft and with the Xen project, respectively.

C12G Labs also released a new version of its OpenNebula Addons, including the LDAP Authentication Module, the Accounting Toolset, and the VMware Driver. All of these have been updated for OpenNebula 3.0.

Community

CERN continues to successfully use OpenNebula in production in several departments. A recent presentation from their IT Department provides a summary of their experiences with OpenNebula within lxcloud for batch virtualization and public cloud services. They recently contributed a blog post describing their experiences in building a private cloud in the Engineering Department at CERN.

This last month, more organizations were added to our list of featured users or announced updates in their OpenNebula offerings, including leading research centers like DESY (Germany’s largest accelerator center), and companies like ZeroNines, ClassCat and IPPON Technologies.

André Monteiro, from the Institute of Electronics and Telematics Engineering of Aveiro, contributed a detailed guide on how to use Windows VMs with OpenNebula.

We were excited to hear that the UK’s Cabinet Office lists OpenNebula as an open-source alternative to proprietary solutions for cloud computing. The Canadian Cloud Best Practices Council has also been exploring OpenNebula, and is preparing a white paper on how OpenNebula can be used for Government Cloud Computing.

Apache Libcloud 0.6.1 was released with support for OpenNebula API v3.0, thanks to Hutson Betts.

Finally, a big thanks to all the community members who continue to develop software around OpenNebula or give OpenNebula presentations around the world. This month, we’d like to thank Ken Barber from Puppet Labs for developing an OpenNebula Puppet module, Ethan Waldo for giving a talk on Deploying Rails to your own private cloud with OpenNebula and Cobbler at an Austin on Rails meeting, and Łukasz Oleś for developing Python bindings for the XML-RPC OpenNebula Cloud API.

Outreach

We have the following upcoming events:

  • Keynote at the OW2 Annual Conference, Paris, France, November 23-24th, 2011
  • OpenNebula will participate in the Open Source Virtualization and Cloud devroom at FOSDEM 2012 (Free and Open source Software Developers’ European Meeting), Brussels, Belgium, February 4-5, 2012.

Last month we participated in various outreach events, but we wanted to point one out in particular: we gave a talk titled Presentation of Group Efforts in OpenNebula Interoperability and Portability at the 5th International DMTF Workshop on Systems and Virtualization Management: Standards and the Cloud, describing our work on interoperability and standards such as OGF OCCI, SNIA CDMI and DMTF OVF.

Remember that you can see slides and resources from all past events in our Outreach page.

We have recently been working on deploying Windows VMs using OpenNebula templates and, although the process can still be optimized, the basic steps are enough to automate it and deploy dozens of Windows VMs simultaneously in a couple of minutes. We thought we would share our experience through the OpenNebula blog.

Basically, we developed a set of scripts that handle the contextualization of the Windows VM on boot. Using the OpenNebula context.sh script, our scripts configure the username, password, IP address, gateway, DNS, hostname and Remote Desktop, and even leave a README file on the Desktop with credentials and recommendations.
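As a rough illustration of the first step these scripts perform, here is a minimal sketch in Python of how the KEY="VALUE" pairs that OpenNebula writes to context.sh on the context CD-ROM can be parsed (the actual scripts are VBScript and PowerShell; the drive letter and the variables printed at the end are assumptions made for this example):

import re

CONTEXT_FILE = r"D:\context.sh"  # context CD-ROM; the drive letter is an assumption

def parse_context(path):
    """Return a dict with the single-line variables defined in context.sh."""
    context = {}
    pattern = re.compile(r'^(\w+)="?(.*?)"?$')
    with open(path) as f:
        for line in f:
            match = pattern.match(line.strip())
            if match:
                key, value = match.groups()
                context[key] = value
    return context

if __name__ == "__main__":
    ctx = parse_context(CONTEXT_FILE)
    # The real scripts would now call the Windows tools (netsh, net user, ...)
    # to apply these values; here we only print what was read.
    print("hostname:", ctx.get("HOSTNAME"))
    print("username:", ctx.get("USERNAME"))
    print("public IP:", ctx.get("IP_PUBLIC"))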

To use these scripts, you must have a Windows machine installed on a raw or qcow file. Windows 2008 (64-bit) and Windows 7 (32/64-bit) have been successfully tested with this approach.

To give you an idea, here is an example of a Windows 2008 ONE template:

CONTEXT=[
  FILES="/opt/opennebula_shared/context-scripts/windows/startup.vbs
              /opt/opennebula_shared/context-scripts/windows/one-context.ps1 
             /opt/opennebula_shared/context-scripts/windows/unattend.xml
             /opt/opennebula_shared/context-scripts/windows/README.txt 
             /opt/opennebula_shared/context-scripts/windows/SetupComplete.cmd",
  HOSTNAME=Win2008-$VMID,
  IP_PUBLIC="$NIC[IP, NETWORK=\"Classes Network\"]",
  PASSWORD=thepassword,
  ROOT_PUBKEY=id_rsa.pub,
  USERNAME=theusername
]

CPU=1

DISK=[
  BUS=virtio,
  DRIVER=qcow2,
  READONLY=no,
  IMAGE_ID=12,
  TARGET=vda,
  TYPE=disk 
]

FEATURES=[ ACPI=yes ]

GRAPHICS=[ TYPE=vnc ]

MEMORY=2048

NAME=Windows2008-SQL2008-VS2010

NIC=[
 MODEL=e1000,
 NETWORK_ID=4 
]

OS=[
  ARCH=x86_64,
  BOOT=hd 
]

TYPE=kvm

We are already using these templates on our OpenNebula server at the Institute of Electronics and Telematics Engineering of Aveiro, and everything has been running smoothly for a month!

Feel free to comment or ask for help. Please visit our guide on Using Windows Images for new Virtual Machines for more information, including the contextualization scripts needed by the template.

Virtualization technology and cloud computing have brought a paradigm shift in the way we utilize, deploy and manage computer resources. They allow fast deployment of multiple operating systems as containers on physical machines, which can either be discarded after use or checkpointed for later re-deployment. At the European Organization for Nuclear Research (CERN), we have been using virtualization technology to quickly set up virtual machines for our developers with pre-configured software to enable them to quickly test/deploy a new version of a software patch for a given application. This article reports on the techniques that have been used to set up a private cloud on commodity hardware and also presents the optimization techniques we used to remove deployment-specific performance bottlenecks.

The key motivation to opt for a private cloud has been the way we use the infrastructure. Our user community includes developers, testers and application deployers who need to provision machines very quickly on-demand to test, patch and validate a given configuration for CERN’s control system applications. Virtualized infrastructure along with cloud management software enabled our users to request new machines on-demand and release them after their testing was complete.

Physical Infrastructure

Implementation

The hardware we used for our experimentation was HP ProLiant 380 G4 machines with 8 GB of memory, 500 GB of disk and Gigabit Ethernet connectivity. Five servers ran the VMware ESXi bare-metal hypervisor to provide virtualization capabilities. We also evaluated the Xen hypervisor with the Eucalyptus cloud but, given our requirement for Windows VMs, we opted for VMware ESXi. OpenNebula Professional (Pro) was used as the cloud front-end to manage the ESXi nodes and to provide users with an access portal.

Deployment architecture with OpenNebula, VMware ESXi and the OpenStack Glance image service.

A number of deployment configurations were tested and their performance was benchmarked. The configurations we tested are the following:

  • Central storage with front end (arch_1): the shared storage and the OpenNebula Pro front end run on two different servers. All VM images reside on the shared storage at all times.
  • Central storage without front end (arch_2): the shared storage, exported over a network filesystem (NFS), shares the same server as the OpenNebula front end. All VM images reside on the shared storage at all times.
  • Distributed storage, remote copy (arch_3): VM images are copied to each ESXi node at deployment time using the Secure Shell (SSH) protocol by the front end’s VMware transfer driver.
  • Distributed storage, local copy (arch_4): VM images are managed by an image manager service which downloads images pre-emptively to all ESXi nodes. The front end runs on a separate server and sets up VMs using the locally cached images.

Each of the deployment configurations has its advantages and disadvantages. arch_1 and arch_2 use a shared storage model where all VMs are set up on central storage. When a VM request is sent to the front end, it clones an existing template image and sets it up on the central storage. It then communicates the memory/networking configuration to the ESXi server and points it to the location of the VM image. The advantage of these two configurations is that they simplify the management of template images, as all of the virtual machine data is stored on the central server. The disadvantages are that, in case of a disk failure on the central storage, all the VMs will lose data, and that system performance can be seriously degraded if the shared storage is not high-performance and does not have high-bandwidth connectivity to the ESXi nodes. The central storage becomes the performance bottleneck for these approaches.
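As a simple illustration of this shared-storage flow (this is not the actual OpenNebula/VMware driver code), the following Python sketch makes the two steps explicit; the paths and the deploy-on-esxi command are hypothetical placeholders:

import os
import shutil
import subprocess

TEMPLATE_IMAGE = "/srv/images/templates/win2008.vmdk"  # hypothetical template on central storage
SHARED_STORE = "/srv/images/running"                    # hypothetical shared (NFS) datastore

def deploy_vm(vm_id, esxi_host, memory_mb, network):
    # 1. Clone the template image onto the central storage.
    vm_disk = os.path.join(SHARED_STORE, "vm-%d.vmdk" % vm_id)
    shutil.copyfile(TEMPLATE_IMAGE, vm_disk)

    # 2. Tell the ESXi server the VM configuration and where the disk lives.
    #    The deploy-on-esxi command is a placeholder for the real driver call.
    subprocess.check_call(["deploy-on-esxi", esxi_host,
                           "--memory", str(memory_mb),
                           "--network", network,
                           "--disk", vm_disk])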

arch_3 and arch_4 try to overcome this shortcoming by using the available disk space on the ESXi servers. The challenge here is how to clone and maintain VM images at run time and how to refresh them when they get updated. arch_3 addresses this by copying the VM image at request time to the target node (using the VMware transfer script add-on from the OpenNebula Pro software); when the VM is shut down, the image is removed from the node. For each new request, a new copy of the template image is sent over the network to the target node. Despite its advantages, the network bandwidth and the ability of the ESXi nodes to make copies of the template images become the bottleneck. arch_4 is our optimization strategy: we implement an external image manager service that maintains and synchronizes a local copy of each template image on each ESXi node using OpenStack’s image and registry service, Glance. This approach resolves both the storage and the network bandwidth issues.
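The idea behind the arch_4 image manager can be sketched as follows (this is not the actual service and does not use the Glance client API; the catalogue URL, cache directory and JSON format of the image list are assumptions): each node periodically compares its local cache with a central catalogue and downloads any template image that is missing or whose checksum has changed.

import hashlib
import json
import os
import urllib.request

CATALOGUE_URL = "http://imageserver.example.org/images.json"  # hypothetical catalogue endpoint
CACHE_DIR = "/var/lib/image-cache"                            # hypothetical local cache on the node

def md5sum(path, chunk=1 << 20):
    """Return the MD5 checksum of a local file."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            digest.update(block)
    return digest.hexdigest()

def sync_images():
    """Download every template image that is missing from the cache or out of date."""
    with urllib.request.urlopen(CATALOGUE_URL) as response:
        catalogue = json.load(response)  # assumed format: [{"name", "checksum", "url"}, ...]

    os.makedirs(CACHE_DIR, exist_ok=True)
    for image in catalogue:
        local_path = os.path.join(CACHE_DIR, image["name"])
        if os.path.exists(local_path) and md5sum(local_path) == image["checksum"]:
            continue  # the cached copy is already up to date
        urllib.request.urlretrieve(image["url"], local_path)

if __name__ == "__main__":
    sync_images()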

Finally, we empirically tested all architectures to answer the following questions:

  • How quickly can the system deploy a given number of virtual machines?
  • Which storage architecture (shared or distributed) will deliver optimal performance?
  • What will be the average wait time for deploying a virtual machine?

Results

All four architectures were evaluated for four different deployment scenarios. Each scenario was run three times, and the averaged results are presented in this section. Any computing infrastructure used by multiple users goes through different cycles of demand, which reduces the supply of available resources and thus the service quality the infrastructure can deliver.

We were particularly interested in the following deployment scenarios, in which 10 virtual machines (10 GB each) were deployed:

  • Single Burst (SB): All virtual machines were sent in burst mode, restricted to one server only. This is the most resource-intensive request.
  • Multi Burst (MB): All virtual machines were sent in burst mode to multiple servers.
  • Single Interval (SI): Virtual machines were sent at intervals of 3 minutes to one server only.
  • Multi Interval (MI): Virtual machines were sent at intervals of 3 minutes to multiple servers. This is the least resource-intensive request.

Aggregated deployment times for various architectures

The results show that, by integrating locally cached images and managing them using the OpenStack image service, we were able to deploy our VMs in less than 5 minutes. The remote copy technique is very useful when image sizes are small, but as the image size and the number of VM requests increase, it adds additional load on the network and increases the time to deploy a VM.

Conclusion

The results have also shown that distributed storage using locally cached images, when managed through a centralized cloud platform (in our study we used OpenNebula Pro), is a practical option for setting up local clouds where users can set up their virtual machines on demand within 15 minutes (from request to machine boot-up) while keeping the cost of the underlying infrastructure low.

After the release of OpenNebula 3.0 (IRIS), C12G Labs is pleased to announce that a development version of the OpenNebula Addons has been contributed to the OpenNebula Project. The major objective of this release is to provide compatibility between the Addons and the new OpenNebula 3.0. The contributed components are:

  • LDAP Authentication Module that permits users to have the same credentials as in LDAP, so effectively centralizing authentication. Now tested against OpenNebula 3.0.
  • Accounting Toolset that visualizes and reports resource usage data, and allows their integration with chargeback and billing platforms. Now tested against OpenNebula 3.0.

The VMware Driver Addon, which enables the management of an OpenNebula cloud based on VMware ESX and VMware Server hypervisors, has been available since the Beta release of OpenNebula 3.0.

C12G Labs would like to point out that the OpenNebula Project endorses these extensions and supports them through the user mailing list. Moreover, the project ensures their full compatibility with current and upcoming releases of OpenNebula.