The OpenNebula project is proud to announce the availability of the beta release of OpenNebula 3.4 (Wild Duck). The software brings countless valuable contributions by many members of our community, especially from Research in Motion, Logica, Terradue 2.0, CloudWeavers, Clemson University, and Vilnius University.

This release is focused on extending the storage capabilities of OpenNebula, including support for multiple datastores. The use of multiple datastores provides extreme flexibility in planning the storage backend and important performance benefits, such as balancing I/O operations between storage servers, defining different SLA and QoS policies for different VM types or users, or easily scaling the cloud storage.

OpenNebula 3.4 also features improvements in other systems, especially in the core with support for logical resource pools, in the EC2 API with support for elastic IPs, in the Sunstone and Self-Service portals with cool new features, and in the EC2 hybrid cloud driver, which now supports EC2 features such as tags, security groups, and VPCs.

With this beta release, Wild Duck enters feature freeze and we’ll concentrate on fixing bugs and smoothing some rough edges. The release is aimed at testers and developers who want to try the new features or to migrate existing drivers (especially TM drivers) to be compatible with the new version.

As usual OpenNebula releases are named after a Nebula. The Wild Duck Cluster (also known as Messier 11, or NGC 6705) is an open cluster in the constellation Scutum.


Cluster Energy Saving system (CLUES) is an energy management system for High Performance Computing (HPC) Clusters and Cloud infrastructures that supports integration with OpenNebula. The main function of the system is to power off internal cluster nodes when they are not being used, and conversely to power them on when they are needed.

The energy savings obtained depend on the usage of each specific infrastructure, its scheduling policies, and its workload patterns, but noticeable savings can be expected whenever some of the nodes are occasionally underused.

The CLUES scheduler is the main component, as it decides when to switch nodes on or off, how many of them should be powered on at a time, how much to over-provision to absorb demand peaks, and so on. The system is integrated with the specific infrastructure at two levels:

  • The CLUES system is integrated with the cluster management middleware by means of different plug-ins. The resource manager connectors provide a uniform way to interact with different resource managers (e.g. Torque/PBS or OpenNebula). Each plug-in consists of two parts:
    1. A monitoring system, used by the engine to obtain information on the state of each node.
    2. A job interceptor, which comes into action whenever a new job is to be submitted to a resource manager: before the job is actually submitted, the plug-in requests the necessary resources from the CLUES scheduler.
  • The CLUES system is integrated with the physical infrastructure by means of different connectors, so that nodes can be powered on and off using the techniques that best suit each particular infrastructure. The method to switch nodes on and off can be tailored to the particularities of each cluster: CLUES ships with IPMI and Wake-on-LAN support to power nodes on, and SSH to power them off, and it can also be integrated with most Power Distribution Units (PDUs).
Once CLUES is properly configured to interact with the infrastructure, it can be integrated with OpenNebula by installing the specific plug-in, which supports OpenNebula 2.2 and 3.x. It is important to note that the integration is not made at the scheduler level; instead, CLUES makes use of the hook system to intercept the VM requests made to OpenNebula.
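
As a rough illustration, such a hook is declared in oned.conf using OpenNebula's standard VM_HOOK syntax. The values below are placeholders; the actual command and arguments are defined by the interceptor script shipped with the CLUES plug-in:

VM_HOOK = [
    name      = "clues_intercept",
    on        = "CREATE",
    # Hypothetical location; use the interceptor script provided by the CLUES plug-in
    command   = "/path/to/clues/opennebula_hook",
    arguments = "$ID" ]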

Once CLUES is working, some parameters should be tuned (e.g. the frequency of state checks, or the time after which a node is considered idle) to coordinate the site's VM scheduling policies with the energy saving techniques.

Both CLUES and the OpenNebula plug-in are available for download at the developer’s website and are distributed as open source under the GPL v3.0 license.

With Open vSwitch (OVS) support in OpenNebula, it was natural to work on using OpenFlow to enable advanced network services. OVS opened the door to a programmable network and to merging the OpenNebula networking paradigm with the field of software-defined networking.

OpenFlow is a protocol that a network controller can use to program OVS switches with the rules to enforce. The rules control packet flows instead of individual packets and can act on several layers of the networking stack.

In our recent work we deployed an OpenFlow controller (NOX) on our cloud at Clemson University. The Onecloud is made up of a couple of KVM hypervisors that use OVS switches and a single NOX controller. The neat features we implemented are Amazon-style Elastic IPs and Security Groups. We modified the econe server in OpenNebula to expose the EC2 Query API for these services, and we wrapped an XML-RPC server on top of NOX to directly set the proper network rules in the controller, which in turn forwards them to the OVS switches on the hypervisors. The figure below summarizes this setup:

What are these rules? For Elastic IPs, the rules simply rewrite packets at L3 and answer ARP requests, so that a public IP leased via OpenNebula can be associated with an instance previously started on a different vnet. For Security Groups, more changes were required on the OpenNebula side, such as extending the database tables, but the rules operate at L4, allowing flows to a specific port of an instance.
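
To make the idea concrete, the Elastic IP case boils down to flows of roughly this shape. The commands below are only an illustration written with ovs-ofctl, with a made-up bridge name and addresses; in the Onecloud the equivalent rules are installed by NOX itself through the XML-RPC wrapper:

# Traffic addressed to the public (elastic) IP is rewritten towards the VM's private address
ovs-ofctl add-flow br0 "priority=100,ip,nw_dst=192.0.2.10,actions=mod_nw_dst:10.0.0.5,normal"

# Replies from the VM are rewritten so they appear to come from the public IP
ovs-ofctl add-flow br0 "priority=100,ip,nw_src=10.0.0.5,actions=mod_nw_src:192.0.2.10,normal"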

The end result is that you can now do something like this (using boto) with an OpenNebula cloud:
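
In the snippets below, `conn` is a boto EC2 connection pointed at the econe server. A minimal sketch of how it could be set up follows; the endpoint, port, and credentials are placeholders to adapt to your own deployment:

import boto
from boto.ec2.regioninfo import RegionInfo

# Placeholder endpoint and credentials; point these at your econe server
region = RegionInfo(name="opennebula", endpoint="econe.example.org")
conn = boto.connect_ec2("access_key", "secret_key", is_secure=False,
                        region=region, port=4567, path="/")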

# Lease an IP address
address = conn.allocate_address()

# Associate the IP with a VM (an Instance object from a previous reservation)
conn.associate_address(instance.id, address.public_ip)

# Remove the Elastic IP from the instance it is associated with
conn.disassociate_address(address.public_ip)

# Return the Elastic IP address to the address pool
conn.release_address(address.public_ip)

For security groups, if one started an instance with all ports blocked, to open port 22 one would proceed exactly the same way as with Amazon EC2:

# Start instance in a security group
reservation = image.run(instance_type="m1.small", security_groups=["sg-000007"])

# Create security group
sg = conn.create_security_group("Test Group", "Description of Test Group")

# Allow access from anyone for HTTP
sg.authorize("tcp", 80, 80, "0.0.0.0", None)

# Allow SSH access from private subnet
sg.authorize("tcp", 22, 22, "10.10.1.0/24", None)

The modified NOX controller is available on Google Code, and the modified econe server will be available in OpenNebula 3.4. The Onecloud site has nice screencasts showing these new OpenNebula enhancements in use.

Sebastien Goasguen, Greg Stabler, Aaron Rosen and K. C Wang

Clemson University

The OpenNebula Project is pleased to announce the release, as a new plugin in the ecosystem, of the OpenNebula 3.2 drivers to build clouds on Microsoft Hyper-V. Since the release of a first integration prototype in October 2011, we have been working with some of our users to improve the stability of the prototype and to incorporate more functionality. In February 2012, we released a development version with enhanced performance and scalability thanks to its integration with technologies commonly available in Windows environments, like Windows Remote Management. This latest release of the Hyper-V drivers for OpenNebula 3.2 additionally brings new features, such as direct connection to Windows Server nodes without requiring a proxy machine, improvements to the CDROM contextualization mechanism, and support for SCSI hard disks.

The new components are available for download under the Apache license from the Hyper-V page of the OpenNebula ecosystem. The OpenNebula project provides support for the deployment and tuning of the new drivers through its ecosystem mailing list.

This development is a new result of our collaboration with Microsoft on innovation and interoperability in cloud computing. In January 2012, we announced the continuation of this collaboration, aimed at bringing the existing OpenNebula Hyper-V interoperability to a more stable version and at enhancing its features.

As we are quickly approaching the Easter holidays, the next release of OpenNebula (3.4, codename Wild Duck) is taking shape. This new release is focused on extending the storage capabilities of OpenNebula. Wild Duck will include support for multiple Datastores, overcoming the single Image Repository limitation of previous versions.

A Datastore is any storage medium (typically SAN/NAS servers) used to store disk images for VMs. The use of multiple Datastores will help you in planning your storage by:

  • Balancing I/O operations between storage servers
  • Setting different storage performance features for different VM types or users
  • Defining different SLA policies (e.g. backup) for different VM types or users

Wild Duck will ship with four different datastore types (a sample definition is sketched after the list):

  • File-system, to store disk images in file form. The files are stored in a directory mounted from a SAN/NAS server.
  • iSCSI/LVM, to store disk images in block device form. Images are presented to the hosts as iSCSI targets.
  • VMware, a datastore specialized for the VMware hypervisor that handles the VMDK format.
  • Qcow, a datastore specialized in the qemu-qcow format, taking advantage of its snapshotting capabilities.
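
As a sketch of how a datastore of the first type could be defined, the template below creates a file-system datastore backed by a shared (e.g. NFS-mounted) directory. The attribute values and the ID in the output are illustrative; check the 3.4 datastore documentation for the drivers available in your setup:

$ cat nfs_ds.def
NAME   = nfs_images
DS_MAD = fs
TM_MAD = shared

$ onedatastore create nfs_ds.def
ID: 100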

As usual in OpenNebula, the system has been architected to be highly modular and hackable, and you can easily adapt the base types to your deployment.

The Datastore subsystem is fully integrated with the current authorization framework, so access to a given datastore can be restricted to a given group or user. This enables the management of complex multitenant environments.
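
For instance, assuming the datastore defined above and the usual chgrp/chmod subcommands that OpenNebula resources expose, restricting it to a single group could look like the following sketch (group name and permission mode are made up):

$ onedatastore chgrp nfs_images research
$ onedatastore chmod nfs_images 640   # owner: use+manage, group: use, others: none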

Also, by popular request, we are bringing back the Cluster abstraction. Clusters are logical sets of physical resources, namely hosts, networks, and datastores. In this way you can better plan your capacity provisioning strategy by grouping resources into clusters.
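
A minimal sketch of how this grouping might look with the command-line tools (resource names are made up):

$ onecluster create production
$ onecluster addhost production host01
$ onecluster adddatastore production nfs_images
$ onecluster addvnet production private-net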

This release also includes important contributions from our user community, especially from Research in Motion (support for qcow datastores), Logica (extended support for EC2 hybrid setups) and Terradue 2.0 (VMware-based datastores). THANKS!

Here’s our monthly newsletter with the main news from the last month, including what you can expect in the coming months.

Technology

Following our rapid release cycle, a pre-release of OpenNebula 3.4 is now available. This pre-release solves minor issues in several OpenNebula components and includes some new features, especially in Sunstone and in the cloud servers (EC2 and OCCI). For a more detailed list of changes, take a look at the release notes. The final release of OpenNebula 3.4 will feature multiple datastores and several new back-ends for storage. Some of this is already in the repository, although it has not been included in this pre-release. We will be releasing OpenNebula 3.4 Beta on the 16th of March… so stay tuned!

We released a new version of the OpenNebula drivers to build clouds on Microsoft Hyper-V. The main aim of this new release is to enhance the performance and scalability of the drivers and to simplify their deployment by leveraging technologies commonly available in Windows environments, like Windows Remote Management. This release also updates the drivers to work with the latest stable version of OpenNebula (3.2). You can find more technical details on the Hyper-V page of the OpenNebula ecosystem. The release of this new driver was announced at CloudScape IV, and a final version of these drivers will be released in a few weeks.

The OpenNebula driver in Deltacloud has been updated to interact with OpenNebula 3.x clouds. If you want to test it, we added a Howto to our wiki showing how to interact with OpenNebula using Deltacloud, and you can also test it with the OpenNebula Public Cloud.

We added some new documentation resources that may be of use to some of our users.

Community

The big news in the community this month was that OpenNebula will be part of the Helix Nebula partnership, a consortium of leading IT providers and three of Europe’s biggest research centres (CERN, EMBL and ESA) launching a European cloud computing platform to enable innovation for science and to create new commercial markets. In the science research area, OpenNebula is used by the leading supercomputing centers, research labs, and e-infrastructures to build and manage their own cloud computing systems.

During this month we also updated our list of contributors to include people from FermiLab, Harvard, Sandia Labs, CERN, IBM, Logica, Puppet, RIM, and many others (if your name is missing from the list, please contact us). We also added several companies and projects to our list of featured users: CITEC, LibreIT, Tokyo Institute of Technology, CloudWeavers, IBERGRID, MeghaCloud, NineLab, ISMB, RENKEI, BrainPowered… If you would like to be added to the list, please take a moment to fill out our OpenNebula User Survey.

We have recently received an important contribution from Research in Motion: they have contributed new image/transfer drivers for qcow2 and multiple datastores, which will be part of the upcoming OpenNebula 3.4.

Fedora 17 Alpha has been released featuring OpenNebula 3.2.1 (thanks to Shawn Starr), and the OpenNebula package in Debian has been updated to 3.2.1 (thanks to Damien Raude-Morvan).

Valentin Bud organized a Cloud Computing and OpenNebula workshop in Timisoara, Romania, on February 16th. Valentin is hoping to organize more workshops like this, so if you live in Romania, make sure you follow his workshop’s Facebook page.

Finally, we would also like to point out that there was recently a very interesting thread in our user mailing list where the gurus of the community exchanged their experiences with different storage subsystems: GlusterFS, GPFS, Lustre, MooseFS, XtreemFS…

Outreach

CloudScape IV was an opportunity to present how OpenNebula is helping to unleash the innovation of cloud computing and to see the wide adoption of OpenNebula in leading international projects working in cloud computing innovation and interoperability in the area of research infrastructures. Projects like VenusC, BonFIRE, EGI, or StratusLab presented their use of OpenNebula, and how its standard APIs are helping them to offer interoperability and portability to their consumers.

FOSDEM was a great opportunity to get feedback from the community, and to meet the people behind the projects we collaborate with: Deltacloud, CompatibleOne, Xen Cloud Platform, and others.

We will be giving an intensive tutorial on basic and advanced usage and configuration of the new OpenNebula 3.2.1 at the Open Source Datacenter Conference (OSDC 2012), taking place in Nuremberg, Germany, on the 25th and 26th of April 2012. Preregistration for the workshop is required.

Remember that you can see slides and resources from past events in our Outreach page. We have also created a Slideshare account where you can see the slides from some of our recent presentations.

The OpenNebula project will be giving an intensive tutorial on basic and advanced usage and configuration of the new OpenNebula 3.2.1 at the Open Source Datacenter Conference (OSDC 2012), taking place in Nuremberg, Germany, on the 25th and 26th of April 2012. Preregistration for the workshop is required.

Complementing the workshop, a one-hour presentation will be given covering the latest features of OpenNebula, including an overview of its design principles, the history of the OpenNebula project, and some other curiosities.

We are pleased to announce that OpenNebula is part of the just announced Helix Nebula partnership, a consortium of leading IT providers and three of Europe’s biggest research centres (CERN, EMBL and ESA) launching a European cloud computing platform to enable innovation for science and to create new commercial markets.

The biggest names in the ICT industry have come together to offer a range of services in an open standards-based framework addressing European data privacy concerns on a large-scale. The partners are Atos, Capgemini, CloudSigma, Interoute, Logica, Orange Business Services, SAP, SixSq, Telefonica, Terradue, Thales, The Server Labs, T‑Systems, the Cloud Security Alliance, the OpenNebula Project, and the European Grid Infrastructure. They will all work together to establish a federated and secure high-performance computing cloud platform to initially address the massive data and compute needs of flagship use cases from CERN, ESA and EMBL.

This announcement consolidates OpenNebula as the leading open-source solution for cloud computing innovation and interoperability. We are very proud to add Helix Nebula to the list of featured initiatives and infrastructures using OpenNebula. In the science research area, OpenNebula is used by the leading supercomputing centers, research labs, and e-infrastructures to build and manage their own cloud computing systems.

You can find more details about “Helix Nebula – The Science Cloud” in the official press release “Big science teams up with big business to kick-start European cloud computing market” at CERN, ESA and EMBL sites.