OpenNebula Implements the OGF Open Cloud Computing Interface Draft Specification

Last Friday, the OpenNebula project announced the implementation of the OGF OCCI draft specification. The release, which will be part of OpenNebula 1.4, includes a server implementation, client commands for using the service and accessing the full functionality of the OCCI interface, and several supporting documents. The latest version of this open-source toolkit for cloud computing, available for download as a beta release, also brings libvirt, the EC2 Query API, and a powerful CLI; all of them can be used on the same OpenNebula instance, so users can choose their favorite interface. In fact, OpenNebula provides support for developing other Cloud interfaces. Moreover, all of these interfaces can be used with any of the supported virtualization technologies: Xen, KVM and VMware.
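As a rough sketch of how the new service can be driven from the command line (the occi-compute client and its subcommands below are an assumption based on the draft implementation, and may differ in the final 1.4 release):

$ occi-compute create vm.xml   # POST an XML description of a new VM
$ occi-compute list            # enumerate your compute resources
$ occi-compute show 42         # retrieve the state of compute resource 42
$ occi-compute delete 42       # shut the VM down and release the resource

Analogous occi-network and occi-storage commands would cover the other two resource types defined by the draft.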

The Open Grid Forum (OGF) Open Cloud Computing Interface (OCCI) Working Group was officially launched in April 2009 to deliver an interface specification for managing cloud infrastructure services, also known as Infrastructure as a Service or IaaS. This specification is being driven by the requirements in several use cases. The document Requirements and Use Cases for a Cloud API records the needs of IaaS Cloud computing managers and administrators in the form of Use Cases.

In recent days there has been intensive discussion on the topic of IaaS Cloud interfaces. There are now three main players in the arena: the Amazon EC2 API, supported by the best-known cloud computing provider; the VMware vCloud API, supported by the leader in virtualization and submitted to the DMTF; and the OGF OCCI API, being defined by an open community in the Open Grid Forum. OpenNebula now implements two of them, EC2 and OCCI, and there is interest in the OpenNebula community in implementing the third interface, vCloud (after all, OpenNebula 1.4 supports VMware). However, the interest of OpenNebula as an open-source community is not only to implement an interface specification controlled by a company, but also to contribute to its definition by providing feedback and playing an active role in subsequent versions. In this sense, OCCI is the only one of the three being defined within a standards body.

While some existing open-source technologies are just implementations of commercial products and interfaces, other open-source technologies, such as OpenNebula, are powerful tools for innovation. A Cloud technology should not only be the implementation of an interface, standardized or not. OpenNebula, as a technology developed in the context of RESERVOIR, the European flagship project in cloud computing, provides many unique capabilities for the scalable and efficient management of data-center infrastructure. Those capabilities are the real differentiators in the cloud and virtualization market.

Ignacio Martín Llorente

OpenNebula Supports the Amazon EC2 Query API on VMware-based Cloud Infrastructures

This is the first post I am writing to illustrate the main novelties of the new version of the OpenNebula Virtual Infrastructure Manager. OpenNebula is an open-source toolkit for building Public, Private and Hybrid Cloud infrastructures based on the Xen, KVM and VMware virtualization platforms. OpenNebula v1.4 is available as a beta release, incorporating bleeding-edge technologies and innovations in many areas of virtual infrastructure management and Cloud Computing.

While previous versions concentrated on functionality for Private and Hybrid Cloud computing, this new version incorporates a new service to expose Cloud interfaces on Private or Hybrid Cloud deployments, thus providing partners or external users with access to the private infrastructure, for example to sell overcapacity. The new version brings a new framework to easily develop Cloud interfaces, and implements, as an example, a subset of the Amazon EC2 Query API. The OpenNebula EC2 Query service is a web service that enables users to launch and manage virtual machines in an OpenNebula installation through the Amazon EC2 Query interface. In this way, besides the OpenNebula CLI or the new libvirt interface, users can use any EC2 Query tool or utility to access their Private Cloud.
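For instance, a session against this service might look like the following sketch (the econe-* client commands are an assumption about the utilities bundled with the service; names, arguments and the example IDs are illustrative):

$ econe-upload /srv/images/ttylinux.img   # upload a local image to the cloud
$ econe-run ami-00000001                  # launch an instance (hypothetical ID)
$ econe-describe-instances                # list running instances and their state
$ econe-terminate i-15                    # shut an instance down

Since the interface follows the EC2 Query API, standard third-party EC2 clients pointed at the OpenNebula endpoint should also work for the implemented subset.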

The OpenNebula team is also developing the RESERVOIR Cloud interface and is planning to develop the OGF OCCI API. Moreover, as stated on its Ecosystem page, the team will also collaborate with IaaS Cloud providers interested in an open-source implementation of their Cloud interface, in order to foster adoption of their Cloud services.

Another interesting new feature is the support for VMware. The VMware Infrastructure API provides a complete set of language-neutral interfaces to the VMware virtual infrastructure management framework. By targeting the VMware Infrastructure API, the OpenNebula VMware adaptors are able to manage various flavors of VMware hypervisors: ESXi, ESX and VMware Server.

The combination of both innovations allows the creation of a Cloud infrastructure based on VMware that can be interfaced using the Amazon EC2 Query API. I will cover more unique features and capabilities in upcoming posts.

Ignacio Martín Llorente

Libvirt 0.6.5 released… including an OpenNebula driver

Libvirt version 0.6.5 was released last week with a number of bug fixes and new features. The complete list of changes can be viewed at the libvirt web site. This new release includes an OpenNebula driver that provides a libvirt interface to an OpenNebula cluster.

What is it? OpenNebula is a Virtual Infrastructure Manager that controls Virtual Machines (VMs) in a pool of distributed resources by orchestrating network, storage and virtualization technologies. The OpenNebula driver lets you manage your private cloud using a standard libvirt interface, including the API as well as the related tools (e.g. virsh) and VM description files.

Why a libvirt interface for your private cloud? Libvirt is evolving into a very rich and widely used interface to manage the virtualization capabilities of a server, including virtual network, storage and domain management. So, libvirt can be a very effective administration interface for a private cloud exposing a complete set of VM and physical node operations. In this way, libvirt + OpenNebula provides a powerful abstraction for your private cloud. More on interfaces for Private Clouds in this post…
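As a minimal sketch, assuming the driver registers the one:/// connection URI for the local OpenNebula front-end, a session could look like this:

$ virsh --connect one:///
virsh # list                 # VMs managed by OpenNebula, shown as libvirt domains
virsh # create domain.xml    # submit a new VM from a libvirt domain description
virsh # suspend one-42       # standard operations work on OpenNebula VMs

(Domain names of the form one-<id> and the exact subset of supported operations are assumptions to be checked against the driver documentation.)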

Ruben S. Montero

Interfaces for Private and Public Cloud Computing

An entire ecosystem is evolving around cloud computing. Interface standardization efforts, commercial products, cloud infrastructure and management services, virtual appliance providers and open-source solutions are filling niches in the cloud ecosystem. The role and position of a component or a service in the ecosystem are defined by its capabilities, the consumers of those capabilities and its relationship with other components and services.

This article presents public and private cloud computing from the perspective of their different application scope and interfaces.

Interfaces for Public Cloud Computing

Public or external clouds offer virtualized resources as a service, enabling the deployment of an entire IT infrastructure without the associated capital costs, paying only for the capacity used. Amazon EC2, ElasticHosts, GoGrid and FlexiScale are examples of commercial cloud providers of elastic capacity, offering a public interface for remote management of virtualized server instances within their proprietary infrastructure. With the growing popularity of these cloud offerings, an ecosystem of tools is emerging that can be used to transform an organization’s existing infrastructure into a public cloud. Technologies such as Globus Nimbus or Eucalyptus provide an open-source implementation of cloud-like public interfaces, and projects such as RESERVOIR are developing open-source toolkits for building any cloud architecture.

The standardization of a public cloud interface is the aim of the OGF Open Cloud Computing Interface Working Group. OCCI-WG is delivering an API specification for the remote management of cloud computing infrastructure, allowing for the development of interoperable tools for common tasks on public clouds, including deployment, autonomic scaling and monitoring. The main consumers of this API would be service management platforms, technologies for building hybrid clouds, and service providers. The working group keeps a complete list of existing cloud APIs and a list of references to studies comparing the APIs. The requirements for the new specification are being extracted from a collection of use cases contributed by the community. The working group is being supported by relevant companies and open-source initiatives in the cloud computing ecosystem.

Interoperability is not only about standardization of interfaces, but also about portability of virtual machines. The DMTF Open Virtualization Format (OVF) can be used as a means for customers of an IaaS provider to express their infrastructural needs. OVF was not designed with cloud computing in mind, so there are issues that need to be solved when it is applied to this environment, in particular regarding automatic elasticity, self-configuration and deployment constraints. In any case, standards for cloud interoperability (OCCI) and virtual machine portability (OVF) are imminent and many providers are planning to adopt them.
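To make the format concrete, this is roughly the skeleton of a minimal OVF 1.0 envelope (abridged; file names and sizes are illustrative):

<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <References>
    <File ovf:id="file1" ovf:href="ttylinux.vmdk"/>
  </References>
  <DiskSection>
    <Info>Virtual disks used by the appliance</Info>
    <Disk ovf:diskId="disk1" ovf:fileRef="file1" ovf:capacity="1073741824"/>
  </DiskSection>
  <VirtualSystem ovf:id="ttylinux">
    <Info>A single virtual machine and its hardware requirements</Info>
    <!-- VirtualHardwareSection: CPU, memory and device items go here -->
  </VirtualSystem>
</Envelope>

Note that nothing in the envelope expresses elasticity rules or placement constraints, which is precisely the gap mentioned above.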

Interfaces for Private Cloud Computing

On the other hand, there is growing interest in tools for leasing compute capacity from the local infrastructure. The aim of these deployments is not to expose a cloud interface to the world in order to sell capacity over the Internet, but to provide local users with a flexible and agile private infrastructure to run service workloads within the administrative domain. This private or enterprise cloud model is not new, since datacenter management has been around for a while. In fact, I would venture that future datacenters will look like private clouds. Platform VM Orchestrator, VMware vSphere, Citrix Cloud Center, and Red Hat Enterprise Virtualization Manager are commercial tools for the management of virtualized services in the datacenter, and thus aimed at building private clouds. The OpenNebula Virtual Infrastructure Engine (now part of Ubuntu) is an open-source alternative for private cloud computing, also supporting hybrid cloud deployments to supplement the local infrastructure with computing capacity from an external cloud.

Private cloud interfaces should therefore allow the integration of the virtualized distributed infrastructure into the data-center management stack, including user and administration support. A private cloud interface should provide semantics rich enough, far beyond those provided by public clouds, to ease this integration. Such an interface should provide additional functionality for virtualization, networking, image and physical resource configuration, management, monitoring and accounting that is not exposed by public cloud interfaces.

The standardization of a private cloud interface may be the aim of the new DMTF Cloud Computing Incubator, given that, according to its charter, one of its benefits is to enable the use of cloud computing within enterprises. The DMTF Open Cloud Standards Incubator Leadership Board currently includes most of the main providers and integrators of private cloud solutions. On the other hand, although conceived as a library to interface with different virtualization technologies, the libvirt virtualization API could also be used as an interface for private cloud computing. This is the approach represented by the libvirt implementation of OpenNebula. The implementation of libvirt on top of a virtual infrastructure manager provides an abstraction of a whole cluster of resources (each one with its own hypervisor), so an entire cluster can be managed like any other libvirt node.

About Using Public Interfaces for Private Cloud Deployments

Using public cloud interfaces to access the local infrastructure would reduce the cost of learning a new interface when moving from a private to a public cloud, but at the expense of providing local users with limited functionality, losing the comfort and control of data-center operations, and using, within the administrative domain, communication protocols and security mechanisms originally created for remote management. Moreover, several local cloud technologies support cloudbursting to build hybrid clouds, combining local infrastructure with public cloud-based infrastructure and enabling highly scalable hosting environments.

That does not mean, of course, that you cannot expose a public interface on top of your private cloud solution, for example if you want to provide partners or external users with access to your infrastructure, or to sell your overcapacity. Obviously, a local cloud solution is the natural back-end for any public cloud.

Ignacio M. Llorente

Building Private and Hybrid Clouds with Ubuntu 9.04

Ubuntu 9.04 (Jaunty Jackalope) has been released today, bringing highly interesting new features, especially in the Cloud Computing and Virtualization area. The new Ubuntu server distribution includes two complementary cloud tools, OpenNebula and Eucalyptus, thus providing the technology required to build the three types of Cloud architectures: private, hybrid and public clouds.

Eucalyptus can be used to transform an existing infrastructure into an IaaS public cloud compatible with Amazon’s EC2 interface. Eucalyptus is fully functional with respect to providing cloud-like interfaces and higher-level cloud functionality for security, contextualization and image management. OpenNebula, on the other hand, is a virtual infrastructure engine that enables the dynamic and scalable deployment and re-placement of groups of interconnected virtual machines within and across sites. OpenNebula can be used primarily as a virtualization tool to manage a distributed virtual infrastructure in the datacenter or cluster, an application usually referred to as a private cloud. OpenNebula can also dynamically scale the local infrastructure using external clouds, thus building hybrid clouds: it provides dynamic “cloudbursting” to any cloud with an Amazon EC2 interface, including Eucalyptus-based clouds.

OpenNebula is building an ecosystem of tools extending its functionality, such as the Haizea lease management system, a libvirt implementation on top of OpenNebula, and a VM consolidation scheduler for Green IT. The project provides support for hosting the development of new ecosystem projects.

Moreover, because OpenNebula is one of the technologies being enhanced in RESERVOIR, the flagship European research initiative in virtualized infrastructures and cloud computing, in a few months several new components will become available to complement its functionality: service elasticity management, VM placement to meet SLA commitments, support for public cloud interfaces…

Ignacio M. Llorente

Interoperation between Cloud Infrastructures

A Distributed Virtual Infrastructure (VI) Manager is responsible for the efficient management of the virtual infrastructure as a whole, providing functionality for the deployment, control and monitoring of groups of interconnected Virtual Machines (VMs) across a pool of resources. An added functionality of these management tools is the dynamic scaling of the virtual infrastructure with resources from remote providers, seamlessly integrating remote Cloud resources with in-house infrastructure. This novel functionality allows capacity to be added and removed in order to meet peak or fluctuating service demands, providing the foundation for interoperation between Cloud infrastructures. The distributed virtual infrastructure then runs on top of a geographically distributed physical infrastructure consisting of resources from the private cloud and several external cloud providers.

Following the terminology defined by the Grid community for getting Grids to work together, we use the term interoperation for the set of techniques that get production Cloud infrastructures to work together using adapters and gateways, while interoperability refers to the ability of Cloud infrastructures to interact directly via common open standards.

Since release 1.0, the OpenNebula distribution has included the plug-ins required to supplement local resources with Amazon EC2 resources in order to satisfy peak or fluctuating demands. This novel feature has been illustrated in several use cases for computing clusters and web servers. The open and flexible architecture of OpenNebula makes it quite simple to create new plug-ins to access other cloud providers. In order to illustrate this, and to provide KVM users with easy access to remote resources, the OpenNebula team has just released the plug-ins required to dynamically grow the infrastructure using ElasticHosts resources. ElasticHosts is the world’s first public cloud based on KVM, providing scalable and flexible virtual server capacity for cloud hosting. An interesting result is that a private infrastructure can dynamically grow using resources from different Cloud providers according to provisioning policies based on resource availability, performance, costs, availability…
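To give an idea of the mechanism, an external provider is registered as just another host, so the scheduler can place VMs on it. The driver names below follow the pattern of the EC2 plug-ins and are an assumption; check each plug-in’s documentation for the exact values:

$ onehost add ec2 im_ec2 vmm_ec2 tm_dummy          # Amazon EC2 as one more host
$ onehost add elastichosts im_eh vmm_eh tm_dummy   # ElasticHosts as one more host

From then on, a VM submitted with onevm create can be placed on the local nodes or burst to either provider, according to the scheduling policy.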

The release of these new plug-ins represents a new step towards an open-source framework for cloud infrastructure federation, which is one of the main goals of RESERVOIR, a European research initiative in virtualized infrastructures and cloud computing.

Ignacio Martín Llorente

Release of OpenNebula Cloud Plug-in for ElasticHosts

The OpenNebula Team is releasing a new plug-in to interface with the ElasticHosts cloud provider, so it can be used to dynamically increase the capacity of your virtualized infrastructure to meet fluctuating or peak demands. This is useful when the local fabric runs out of capacity to spawn a new virtual machine; in that case it may be worthwhile to add capacity from a cloud provider.

Cloud bursting with OpenNebula and ElasticHosts

ElasticHosts offers KVM-based virtualized hosts in a cloud-like fashion, i.e., à la Amazon EC2, using a very neat RESTful API. Uploading images (drives, in ElasticHosts parlance) preconfigured with the service that needs to meet an increased demand enables the cloudbursting described above through OpenNebula.
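To give a flavor of that API (the endpoint and resource below are meant to be illustrative of its documented style rather than a verbatim transcript; authentication uses your user UUID and secret API key):

$ curl -u $USER_UUID:$SECRET_KEY https://api.elastichosts.com/drives/list

Servers are managed through analogous /servers/... calls, which is what the OpenNebula plug-in drives on your behalf.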

Information on how to download and install the ElasticHosts plug-in can be found in the OpenNebula Trac.

Tino Vazquez

Set Up a Private Cloud in 5 Steps with Ubuntu and OpenNebula

So, do you want to transform your rigid and compartmentalized infrastructure into a flexible and agile platform where you can dynamically deploy new services and adjust their capacity? If the answer is yes, you want to build what is nowadays called a private cloud.

In this mini-howto, you will learn how to set up such a private cloud in 5 steps with Ubuntu and OpenNebula. We assume that your infrastructure follows a classical cluster-like architecture, with a front-end (cluster01 in this howto) and a set of worker nodes (cluster02 and cluster03).

First, you’ll need to add the following PPA if you’re running Ubuntu 8.04 LTS (Hardy) or Ubuntu 8.10 (Intrepid); you should be fine without it if you are running Ubuntu 9.04 (Jaunty Jackalope):

deb http://ppa.launchpad.net/opennebula-ubuntu/ppa/ubuntu intrepid main

If everything is set up correctly (remember to run sudo apt-get update after adding the repository), you should be able to see the OpenNebula packages:

$ apt-cache search opennebula
libopennebula-dev - OpenNebula client library - Development
libopennebula1 - OpenNebula client library - Runtime
opennebula - OpenNebula controller
opennebula-common - OpenNebula common files
opennebula-node - OpenNebula node

OK, then. Here we go:

  1. [Front-end (cluster01)] Install the opennebula package:
    $ sudo apt-get install opennebula
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following extra packages will be installed:
      opennebula-common
    The following NEW packages will be installed:
      opennebula opennebula-common
    0 upgraded, 2 newly installed, 0 to remove and 2 not upgraded.
    Need to get 280kB of archives.
    After this operation, 1352kB of additional disk space will be used.
    Do you want to continue [Y/n]?
    
    ...
    Setting up opennebula-common (1.2-0ubuntu1~intrepid1) ...
    ...
    Adding system user `oneadmin' (UID 107) ...
    Adding new user `oneadmin' (UID 107) with group `nogroup' ...
    Generating public/private rsa key pair.
    ...
    Setting up opennebula (1.2-0ubuntu1~intrepid1) ...
    oned and scheduler started

    As you may see from the output, the apt-get command performs several configuration steps: it creates a oneadmin account, generates an RSA key pair, and starts the OpenNebula daemon.

  2. [Front-end (cluster01)] Add the cluster nodes to the system. In this case, we’ll be using KVM and no shared storage, so each host is registered with the KVM information and virtualization drivers and the SSH transfer driver (im_kvm, vmm_kvm and tm_ssh below). This simple configuration should work out-of-the-box with Ubuntu:
    $ onehost add cluster02 im_kvm vmm_kvm tm_ssh
    Success!
    ...
    $ onehost add cluster03 im_kvm vmm_kvm tm_ssh
    Success!
    ...
    

    Now, we just have to follow the wise output of the previous commands.

  3. [Front-end (cluster01)] You need to add the cluster nodes to the known hosts list for the oneadmin user:
    $ sudo -u oneadmin ssh cluster02
    The authenticity of host 'cluster02 (192.168.16.2)' can't be established.
    RSA key fingerprint is 37:41:a5:0c:e0:64:cb:03:3d:ac:86:b3:44:68:5c:f9.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'cluster02,192.168.16.2' (RSA) to the list of known hosts.
    oneadmin@cluster02's password:
    

    You don’t actually need to log in.

  4. [Worker Node (cluster02,cluster03)] Install the OpenNebula Node package:
    $ sudo apt-get install opennebula-node
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following extra packages will be installed:
      opennebula-common
    The following NEW packages will be installed:
      opennebula-common opennebula-node
    ...
    Setting up opennebula-node (1.2-0ubuntu1~intrepid1) ...
    Adding user `oneadmin' to group `libvirtd' ...
    Adding user oneadmin to group libvirtd
    Done.

    Note that oneadmin is also created on the nodes (no need for NIS here) and added to the libvirtd group, so it can manage the VMs.

  5. [Worker Node (cluster02,cluster03)] Trust the oneadmin user at the SSH level; just copy the command from the onehost output ;) :
    $ sudo tee /var/lib/one/.ssh/authorized_keys << EOT
    > ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAm9n0E4bS9K8NUL2bWh4F78LZWDo8uj2VZiAeylJzct
    ...7YPgr+Z4+lhOYPYMnZIzVArvbYzlc7HZxczGLzu+pu012a6Mv4McHtrzMqHw== oneadmin@cluster01
    > EOT
    

    You may want to check that you can log in to the cluster nodes from the front-end using the oneadmin account, without being asked for a password:

    $ sudo -u oneadmin ssh cluster02

You are done! You have your own cloud up and running! Now, you should be able to see your nodes ready to start VMs:

$ onehost list
 HID NAME                      RVM   TCPU   FCPU   ACPU    TMEM    FMEM STAT
   0 cluster02                   0    100    100    100  963992  902768   on
   1 cluster03                   0    100    100    100  963992  907232   on

You may want to check the OpenNebula documentation for further information on configuring the system or to learn how to specify virtual machines.
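As a taste of what such a specification looks like, here is a minimal VM template (the image path and network name are placeholders; see the documentation for the full attribute list):

NAME   = ttylinux
CPU    = 0.5
MEMORY = 128

DISK = [
  source   = "/srv/cloud/images/ttylinux.img",
  target   = "hda",
  readonly = "no" ]

NIC = [ network = "Private LAN" ]

Saving this as ttylinux.template and running onevm create ttylinux.template should get your first VM scheduled onto one of the nodes.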

Ruben S. Montero

When 8 are 26 (or even more…)

Nowadays it is difficult to find a place where the benefits of virtualization are not praised. One of my favorites is consolidation, which has done away with the “one application, one server” paradigm. However, when you start placing your services on different hosts, you will quickly find that the new model does not scale per se. Soon (ten VMs is enough), you’ll be trying to answer questions like: Where did I put our web server? Is this MAC address already in use? Is this the updated image for the Ubuntu server? And the like.

That was our situation last year; although we organized ourselves with a wiki and a database to store all the information, it became evident that we needed a management layer on top of the hypervisors. We started the development of OpenNebula to implement our ideas about the distributed management of Virtual Machines.

Today OpenNebula is a reality, and it has reached production quality (at least we are using it in production). I’ll briefly show you how we set up our infrastructure with OpenNebula, KVM and Xen.

The Physical Cluster

Our infrastructure is an 8-blade cluster, each blade with 8GB of memory and two quad-core CPUs (8 cores per blade). The blades are connected to a private LAN and to the Internet, so they can host public services. The output of the onehost list command, colorized for dramatic purposes ;), shows all eight blades up and running.

Originally the blades were configured as a classical computing cluster.

The Virtual Infrastructure

Now, thanks to OpenNebula, the cluster can be easily configured to host multiple services. For example, we are using the 8 blades for:

  • Two grid clusters (4 nodes each) for our grid projects. Needless to say, we can add/remove nodes to these clusters as we need them
  • A Grid broker based on the GridWay metascheduler
  • A web server, actually serving this page
  • A couple of teaching clusters for our master students, so they can break anything
  • A cluster devoted to the RESERVOIR project
  • And my favorite, a cluster where we develop OpenNebula. The nodes of this cluster are Xen-capable, and they run as KVM virtual machines. So yes, we are running VMs on Xen nodes which are themselves KVM VMs.

All of these virtual machines can be seen in the output of the onevm list command.

One of the most useful features of the new version of OpenNebula is the networking support. You can define several networks, and OpenNebula will lease MAC addresses from the network you choose, so you do not have to worry about colliding addresses. The MAC addresses are assigned using a simple pattern, so you can derive the IP from the MAC (see the OpenNebula Networking guide for more info).
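For example, a ranged virtual network can be described with a small template and registered with onevnet create (a sketch in the style of the Networking guide; the bridge name and address range are placeholders for our setup):

NAME            = "Devel cluster"
TYPE            = RANGED
BRIDGE          = br0
NETWORK_ADDRESS = 192.168.0.0
NETWORK_SIZE    = C

With the default pattern, the lease for 192.168.0.5 would carry c0:a8:00:05 (the IP in hexadecimal) in the trailing bytes of its MAC address.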

We are using a virtual network for each virtual cluster, as the onevnet list and onevnet show commands reveal.

Thanks to the hypervisors and OpenNebula, we can shape our 8 blades into any combination of services, and we can resize these services to suit our needs. As can be seen above, you can easily find where the web server is running, so you can get a console in case you need one, or migrate the VM to another blade…

Although management is now very easy with OpenNebula, setting up everything (installing the physical blades and the VMs, configuring the network…) is a difficult task. Kudos to Javi and Tino for getting the system running!

Ruben S. Montero