C12G Labs has just announced an update release of OpenNebulaPro, the enterprise edition of the OpenNebula Toolkit. OpenNebula 3.2, released two weeks ago, brings important benefits to cloud providers, with a new easily-customizable self-service portal for cloud consumers, and to cloud builders, with full support for VMware that now includes live migration, advanced contextualization and image management. The new release additionally includes important enhancements in networking and security.

C12G delivers OpenNebulaPro for business, government, or other organizations looking for a hardened, certified, supported cloud platform. OpenNebulaPro combines the rapid innovation of open-source with the stability and long-term production support of commercial software. Compared to OpenNebula, the expert production and integration support of OpenNebulaPro and its higher stability increase IT productivity, speed time to deployment, and reduce business and technical risks.

OpenNebulaPro 3.2 integrates the most recent stable version of OpenNebula 3.2 with the bug, performance, and scalability patches developed by the community and by C12G for its customers and partners. Supported Linux distributions are Red Hat Enterprise Linux, SUSE Linux Enterprise, CentOS, Ubuntu, Debian and openSUSE; supported hypervisors are VMware, KVM and Xen; and the supported cloud provider for cloudbursting is Amazon Web Services. OpenNebulaPro 3.2 is provided under an open-source license to customers and partners on an annual subscription basis through the OpenNebula.pro Support Portal.

C12G offers a 30-day, no-cost, and no-commitment trial of OpenNebulaPro with the services to assess its suitability and performance in your environment.


The OpenNebula Project is pleased to announce the continuation of its collaboration with Microsoft on innovation and interoperability in cloud computing. The OpenNebula Project and Microsoft started to collaborate in summer 2011 with the aim of adding and maintaining Hyper-V on the list of officially supported hypervisors. The first collaboration was announced in September 2011. As a result, in October 2011, we released under the Apache license a development version of the new plug-ins to build clouds on Microsoft Hyper-V.

The aim of this second collaboration is to bring the existing prototype to a more stable version and to enhance its features. Since the release of the first prototype, the OpenNebula project has provided support for the deployment and tuning of the new drivers to several users. These users have provided relevant feedback on functionality, stability and performance that will be addressed in the new version. You can find more technical details in the Hyper-V page of the OpenNebula ecosystem.

Great news! Stay tuned: the new version will be ready in Q1 2012.

January 30th, 2012. Two weeks after the OpenNebula 3.2 release, the OpenNebula project announces the general availability of OpenNebula 3.2.1. This is a maintenance release that incorporates all the great feedback received since the 3.2 release.

This release only includes bug fixes (check here for the list of issues solved) and is a recommended update for everyone running any 3.x version.

Check out the OpenNebula 3.2 release notes for the release highlights and a summary of the new features incorporated in OpenNebula 3.2.


The OpenNebula Team

The OpenNebula Cloud offers a virtual computing environment accessible through two different remote cloud interfaces, OCCI and EC2, and two different web interfaces, Sunstone for cloud administrators and the new SelfService for cloud consumers. These mechanisms access the same infrastructure, i.e. resources created through any of these methods will be instantly available through the others. For instance, you can create a VM with the OCCI interface, monitor it with the EC2 interface, and shut it down using the OpenNebula Sunstone web interface.
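
As a sketch, that round trip looks like this with the stock command-line clients shipped with the OCCI and econe tools (the template file name is illustrative, and the authentication flags/environment variables are omitted):

    # Create a VM through the OCCI interface (vm.xml is an
    # illustrative OCCI compute template)
    $ occi-compute create vm.xml
    # Monitor the same VM through the EC2 query interface
    $ econe-describe-instances
    # ...and shut it down from the Sunstone web UI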

This Cloud has been migrated to the latest OpenNebula version, 3.2. If you have an account you can still use your old username and password. If not, request a new account and check out the new OpenNebula 3.2 features. These interfaces will show you the regular user view of the Cloud; you will not be able to manage ACLs, hosts, groups or users, since those tasks are delegated to the oneadmin group.

A new episode of the screencast series is now available at the OpenNebula YouTube Channel.

This screencast, the second part of the oZones screencast, shows how to manage and use Virtual Data Centers, both with the oZones CLI and with the oZones web-based interface, to isolate virtual infrastructure environments within an OpenNebula zone. It shows how to create a VDC by assigning a group of users to a group of physical resources and by granting one of the users, the VDC administrator, privileges to manage all virtual resources in the VDC. The users in the VDC, including the VDC administrator, see only the virtual resources and not the underlying physical infrastructure, and can create and manage virtual compute, storage and networking capacity.

Enjoy the screencast!

January 17th, 2012. The OpenNebula project is happy to announce the availability of the stable release of OpenNebula 3.2. This release of OpenNebula features important improvements in security, networking and user management. It also fully integrates C12G addons, previously only available for OpenNebulaPro customers.

Among its main new features, OpenNebula 3.2 incorporates an easily-customizable self-service portal for end-users that greatly simplifies VM provisioning in the data center. This update also brings the highest levels of flexibility, stability, scalability and functionality for VMware-based data centers and clouds in the open-source domain. OpenNebula 3.2 provides an open management platform comparable to vCenter and vCloud, and one that can moreover be adapted to fit your environment.

As usual, OpenNebula releases are named after a nebula. The Red Spider Nebula (NGC 6537) is a bipolar planetary nebula in the constellation Sagittarius.

Highlights of OpenNebula 3.2

Notable improvements include, but are not limited to:

  • VMware, out-of-the-box support for VMware that now includes live migration, advanced contextualization, image and network management.
  • Self-Service Portal, a new easy-to-use web-based end-user interface that complements the existing GUIs for the operation of the data-center (OpenNebula Sunstone) and for the management of multiple zones (OpenNebula Zones).
  • User & Group Management, to easily share virtual resources with other users and groups.
  • Improved Security, fixing security issues and incorporating new authentication drivers and performance improvements.
  • Networking Drivers, a new set of drivers now available to perform networking setup operations.
  • Data Center Placement Policies, placement policies that can be defined globally to optimize the resources of the data center. There are 4 predefined policies: packing, striping, load-aware, and custom (see the configuration sketch below).
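
As a sketch of how a global policy is selected, assuming the 3.x scheduler configuration format in sched.conf (where policy 0 is packing, 1 striping, 2 load-aware and 3 custom, the latter taking a RANK expression):

    DEFAULT_SCHED = [
       policy = 1
    ]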


The OpenNebula Team

Work done by Debasis Roy Choudhuri, Bharat Bagai, Joydipto Banerjee, Udaya Keshavadasu, Rajeev D Samuel, Mitesh Chunara & Krishna Singh at the Business Application Modernization (BAM) Department of IBM India.

In our previous post, we showed how to implement cloud management with OpenNebula in a nested VMware environment. That is mostly cloud administration work. In this blog, we will focus more on the end users’ point of view. This exercise was also done at the Business Application Modernization (BAM) department of IBM India.

Scope

The goal was to set up a self-service portal based on the EC2 query interface, from which cloud users can provision and launch the various images available. Users can also make use of Amazon’s public cloud services.

Implementation

To test this scenario we can use either the HybridFox or the ElasticFox plug-in. In our scenario, we used HybridFox version 1.7.000119 on the client end with the Mozilla Firefox browser. On the front-end machine, you have to install the prerequisite Ruby gems for the EC2-like (econe) interface; a sketch is shown below. Later on, with the help of this interface, you can connect to Amazon Web Services. There are also certain changes to configuration files that you have to make on the front-end machine.
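
A sketch of the gem installation (the exact gem set depends on the OpenNebula version; amazon-ec2 is the gem the econe tools build on, and sinatra is assumed for the server side):

    # gem install amazon-ec2 sinatra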

  • File econe.conf:
    :one_xmlrpc: http://localhost:2633/RPC2
    :server:
    :port: 4567
    :auth: ec2
    :instance_types:
      :m1.small:
        :template: m1.small.erb
  • File EC2QueryClient.rb: Verify that Signature Method refers to ‘HmacSHA256’
  • File EC2CloudAuth.rb: add the two lines marked with + to the signature_v1 method:
    # Calculates signature version 1
    def signature_v1(params, secret_key, digest='sha1')
        params.delete('Signature')
    +   params.delete(:econe_host)
    +   params.delete(:econe_port)
        req_desc = params.sort {|x,y| x[0].downcase <=> y[0].downcase}.to_s
        digest_generator = OpenSSL::Digest::Digest.new(digest)

Once you have integrated the plug-in with Mozilla Firefox and restarted the econe service on the front-end machine, go to the browser and add your region.
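
A sketch of the restart, assuming the standard econe-server script run as the oneadmin user:

    $ econe-server stop
    $ econe-server start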

Here, the AWS Secret Access Key refers to the SHA1 hash of the user’s password, which you can see with the oneuser command.
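
For example (the user ID is illustrative):

    # The PASSWORD column holds the SHA1 hash that is used as the
    # AWS Secret Access Key
    $ oneuser list
    $ oneuser show 2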

Then you will get your EC2 Interface.

In this way, you can add more regions with credentials to access other clouds. You can also launch virtual machines and perform other operations from this interface.

Bharat Bagai
bbagai@gmail.com, bagai_bharat@hotmail.com

Work done by Debasis Roy Choudhuri, Bharat Bagai, Joydipto Banerjee, Udaya Keshavadasu, Rajeev D Samuel, Mitesh Chunara & Krishna Singh at the Business Application Modernization (BAM) Department of IBM India.

In this blog we try to highlight some of the key elements involved in building a private Cloud environment using OpenNebula.

Scope

The goal was to set up a Platform-as-a-Service (PaaS) sandbox environment where our practitioners can get hands-on practice with various open-source tools and technologies. We were successful in creating an on-demand model where Linux-based images with the required software (e.g. MySQL, Java or any configurable middleware) could be provisioned using the OpenNebula web-based interface (Sunstone), along with email notification to the users.

The highlight of the entire exercise was using nested hypervisors to set up the OpenNebula cloud, a feature which was probably being tried for the first time (we checked the public domain and the OpenNebula forums, where nobody was sure whether such a scenario existed, or whether it was feasible).

Implementation

We started with OpenNebula version 2.2 and later upgraded to 3.0. The hypervisor used was VMware ESX 4.1, and CentOS 5.5 and 6.0 were used as the operating systems for the provisioned images. The hardware employed was an IBM System x3650 M2 for the cloud management environment and administration, with IBM SAN storage for the provisioning of images.

Here’s the architecture diagram –

We have configured the above scenario in a single ESXi box (Physical Server).

As you can see in the architecture diagram, we configured OpenNebula (the front end) to use the VMware hypervisor (an ESXi VM) to host VMs, and the vSphere client on a Windows VM to access the ESXi host (for admin work). One VM was designated as the image repository, where all client images were stored. We also configured NFS on the image repository VM.

Note: Before starting the installation, you have to install EPEL (Extra Packages for Enterprise Linux). EPEL contains high-quality add-on packages for CentOS, Scientific Linux and other RHEL-based distributions, some of which are required by OpenNebula.
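
A sketch of the EPEL setup on CentOS 5.x (the epel-release URL and package version are illustrative; pick the one matching your release):

    # rpm -Uvh http://download.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
    # yum repolist | grep -i epel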

For storage, we used NFS for both OpenNebula and VMware. For that we created a separate NFS server, though you can also use the same server. The following is the architecture diagram that we used:

In the above diagram, we use SAN storage mapped to the physical ESXi host. The storage space is then distributed among the VMs. For the image repository, we took a large chunk of space and made that VM the NFS server as well. This large chunk of space is shared between the ESXi host and the front end. The naming of the NFS storage space should be the same on all three servers.

One more important point to mention here is that the name of the datastore should be the same on both the VMware hypervisor and the front-end machine, as shown below:
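
A minimal sketch of the shared-path convention (host names and paths are illustrative); the same export must be visible under the same path on the image repository, the front end and the ESXi datastore:

    # /etc/exports on the image repository VM
    /srv/cloud/one    *(rw,sync,no_root_squash)

    # On the front-end machine, mount it at the same path
    # mount imagerepo:/srv/cloud/one /srv/cloud/one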

A point to note about VMware

Remember that your VMware ESX server should not be the free version; it has to be either the 60-day evaluation edition or a fully licensed version. Otherwise you will get errors while deploying VMs from the front-end machine. You can use the following to test VM functionality from the command prompt:

/srv/cloud/one/bin/tty_expect -u oneadmin -p password virsh -c esx:///?no_verify=1

Once connectivity is established, you can create the VM network and deploy VMs from the front-end machine. I recommend that, while creating the vmdk file (which you will later use as an image), you also install VMware Tools.

A point about using the context feature:

At present, the context feature for VMware is not supported by OpenNebula 3.0; it is available only for the KVM and Xen hypervisors. With the context feature, the OpenNebula front end can provide the IP address, hostname, DNS server, gateway, etc. to client VMs. As a workaround for VMware ESXi, we used an alternative method: a custom script that emulates OpenNebula’s context feature. This script provides the client VM with its IP address, hostname, VMID, etc. After assigning these details to the VM, it emails the cloud admin with the necessary information.
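
A minimal sketch of such a script (all values and the notification address are hypothetical; in practice the front end filled them in per VM at deployment time):

    #!/bin/bash
    # Hypothetical context-emulation script for an ESXi guest:
    # assigns network settings and notifies the cloud admin
    IP=10.0.0.101          # assigned by the front end
    HOSTNAME=vm101         # derived from the OpenNebula VMID
    GATEWAY=10.0.0.1
    hostname $HOSTNAME
    ifconfig eth0 $IP netmask 255.255.255.0 up
    route add default gw $GATEWAY
    echo "VM $HOSTNAME ($IP) provisioned" | mail -s "VM up" admin@example.com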

Hints and Tips

Some errors which surfaced during the OpenNebula installation and configuration and their solutions are given below:

  1. During the libvirt addon installation for the VMware hypervisor, you might get the following error:

    configure: error: libcurl >= 7.18.0 is required for the ESX driver

    Solution: Upgrade curl to the latest version (or at least curl 7.21.7). First check the installed packages:

    # rpm -qa | grep -i curl

    To remove curl, use the following commands:

    # rpm -e --nodeps --allmatches curl
    # rpm -e --nodeps --allmatches curl-devel

    And then configure and build curl with prefix /usr.

    To check the curl version, you can use these commands:

    /usr/bin/curl-config --version
    curl --version

    Now try to install libvirt with ESX support, using the following command:

    # ./configure --with-esx

    Also check that your PKG_CONFIG_PATH refers to “/usr/lib/pkgconfig”.
    To check the libvirtd version, you can use these commands:

    # /usr/local/bin/virsh -c test:///default list

    or

    # /usr/local/sbin/libvirtd --version

    Also make sure that your libvirtd package supports the necessary ESX version.

  2. You may also get errors when restarting services:

    Starting libvirtd daemon: libvirtd: /usr/local/lib/libvirt.so.0: version `LIBVIRT_PRIVATE_0.8.2' not found (required by libvirtd)

    Solution: Uninstall libvirtd and then configure it again with the library paths:

    # ./configure --with-esx PKG_CONFIG_PATH="/usr/lib/pkgconfig" --prefix=/usr --libdir=/usr/lib64

Challenges Faced

The team faced several challenges during the journey. Some of the interesting ones are highlighted as follows:

  • Minimize the infrastructure cost on Cloud physical servers. Workaround: Usage of VMs for cloud components like OpenNebula Front End, Image Repository and Host; usage of VLAN with private IP addresses
  • Minimize the cost of provisioning public IP addresses in IBM corporate network for the Cloud infrastructure and the VMs. Workaround: Deployment of dynamic host configuration in the cloud environment with a range of private IP addresses
  • Minimize VMware Hypervisor licensing costs. Workaround: Resolved this issue by building a VM with VMware vSphere ESXi hypervisor on parent ESXi hypervisor (nested Hypervisor scenario).
  • Configuring a GUI for OpenNebula administration tasks. Workaround: Installing Sunstone as an add-on product for provisioning of images, creation of VMs, etc., and making it compatible with the VMware hypervisor
  • Accessibility of the private VMs in the cloud within the IBM network. Workaround: Leveraged SSH features, tunneling and port forwarding (see the sketch after this list)
  • Limitations of OpenNebula product in passing the host configuration to VMs. Workaround: Internal routing for the cloud components and the VMs in the cloud and deployment of dynamic host configuration in the cloud environment
  • Dynamic communication to the cloud users/admin on provision/decommission of VMs and host configuration/reconfiguration of the VMs in the cloud. Workaround: Use hooks feature of OpenNebula and shell scripts to embed customized scripts in VM image
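
A sketch of the SSH port-forwarding workaround mentioned above (hosts, ports and addresses are illustrative): tunnel through the publicly reachable front end to reach a VM on the private network:

    # Forward local port 2222 to SSH on a private VM via the front end
    $ ssh -L 2222:10.0.0.101:22 oneadmin@frontend.example.com
    # Then, from another shell on the same machine:
    $ ssh -p 2222 user@localhost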

Bharat Bagai
bbagai@gmail.com, bagai_bharat@hotmail.com

Written in conjunction with Tino Vazquez, of the OpenNebula Project, and cross-posted on the Puppet blog.

Puppet is used for managing the infrastructure of many IaaS software packages, including Eucalyptus, OpenStack, and OpenNebula. OpenNebula is an IaaS manager which can not only manage a large number of different virtualization and public cloud platforms, it can also emulate the APIs provided by EC2 and OCCI. It’s great for creating private and public clouds, as well as hybrids of the two.

Puppet Labs (or, more specifically, Ken Barber) has developed a powerful integration between OpenNebula and Puppet. The installation and configuration of OpenNebula can be managed with this solution, and a virtualized infrastructure can be provisioned starting from bare metal using only Puppet recipes. The Puppet module that integrates with OpenNebula can be downloaded from the Forge here: http://forge.puppetlabs.com/puppetlabs/opennebula

Installation and Configuration

The Puppet module contains several classes, types and providers for managing OpenNebula resources and installing OpenNebula.

The class ‘opennebula::controller’ is a simple class used for managing the main OpenNebula controller node. An example usage would be:

class { "opennebula::controller":
  oneadmin_password => "mypassword",
}

This will configure the necessary parts for the main controller node, applying the necessary password for the primary ‘oneadmin’ user.

The class opennebula::node can be applied to nodes that will act as hypervisors. This class configures the necessary packages and the SSH authorization used by the SSH transfer, information and virtualization drivers in OpenNebula.

The class itself really only needs to know the location of its controller, and it uses stored configurations to ship the necessary SSH keys across:

class { "opennebula::node":
  controller => "one1.mydomain.com",
}

The Sunstone GUI can be remotely managed using the ‘opennebula::sunstone’ class. The module can also manage the EC2 gateway using ‘opennebula::econe’.
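
For instance, a quick sketch of enabling the GUI from the command line (no class parameters assumed):

    $ puppet apply -e 'class { "opennebula::sunstone": }'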

Managing OpenNebula Resources

OpenNebula has many elements that can be managed on the command line:

  • Hosts
  • Images
  • Virtual Networks
  • Virtual Machines

What’s great about OpenNebula is that the same resources can be managed using their own GUI, namely ‘Sunstone’:

[Screenshot: the Sunstone GUI]

We provide some resource types through the Puppet OpenNebula module that allow managing these elements via Puppet as well. The detailed documentation for each of these is provided in the README file for the module, but let’s talk about one in particular: the onevm resource.

The onevm resource allows you to actually manage a virtual machine as if it were a Puppet resource. An example usage in Puppet would be:

onevm { "db1.vms.cloud.bob.sh":
  memory => "256",
  cpu => 1,
  vcpu => 1,
  os_arch => "x86_64",
  disks => [
    { image => "debian-wheezy-amd64",
      driver => "qcow2",
      target => "vda" }
  ],
  graphics_type => "vnc",
  graphics_listen => "0.0.0.0",
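  # The context values below use OpenNebula template variables
  # ($NAME, $NETWORK[...], $NIC[...]), resolved at deployment time.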
  context => {
    hostname => '$NAME',
    gateway => '$NETWORK[GATEWAY]',
    dns => '$NETWORK[DNS]',
    ip => '$NIC[IP]',
    files => '/var/lib/one/context/init.sh',
    target => "vdb",
  }
}

As you can see, this mirrors all of the options made available via the template when creating virtual machines using the command line or Sunstone GUI in OpenNebula:

[Screenshot: VM template options]

Using Puppet simply provides another capability for managing OpenNebula. Upon creation, the VM is created just like any other VM and will then appear when running ‘onevm list’ or when viewing the list of virtual machines in Sunstone:

[Screenshot: the list of virtual machines in Sunstone]
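
For instance, from the command line (output illustrative):

    $ onevm list
       ID USER     NAME            STAT CPU     MEM HOSTNAME   TIME
       42 oneadmin db1.vms.cloud.b runn   1    256M   node01   00 00:05:12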

Managing Applications End-to-End

An end-to-end example to demonstrate the capabilities of this integration is the deployment of a sample pastie/pastebin application with redundant web servers:

[Diagram: sample application architecture]

The sample content to build such an infrastructure is located here: http://github.com/kbarber/puppet-onedemo

In this demo content we deploy the IaaS manager OpenNebula, correctly configured and including its dependencies like libvirt. We then use the newly installed virtualization engine to start a virtualized application consisting of web servers behind a load balancer.

Combining OpenNebula and Puppet allows you to achieve a fairly complete end-to-end architecture for rapid deployment within a private cloud infrastructure. The following diagram shows some of the necessary elements in such an end-to-end architecture:

Development Progress

Currently the puppetlabs-opennebula module is OpenNebula 2.2 specific, but we are looking to add OpenNebula 3.0 support once 3.0 becomes available in the distributions (such as Debian). If you like the idea of having Puppet manage OpenNebula for installation, configuration or management, we are looking for more code contributors, testers and users.

Bugs can be raised in the Puppet Redmine project for our public modules here.

And the code is available here: https://github.com/puppetlabs/puppetlabs-opennebula

Any help or comments are much appreciated; your feedback will be used to refine the integration and make it more functional. We are confident that this integration adds value to your IaaS and private cloud projects, and we hope you enjoy using it as much as we enjoyed implementing it.

Additional Resources

  • More information on the Puppet/OpenNebula integration, in slides and video.

OpenNebula 3.2 will be released in a few days. Along with other major features, it will include a new easy-to-use web-based end-user interface: OpenNebula Self-Service. This new GUI will complement the existing GUIs for the operation of the cloud (OpenNebula Sunstone) and for the management of multiple zones and virtual data centers (OpenNebula Zones).

OpenNebula Self-Service is meant to offer a simplified interface to end-users of the OpenNebula cloud. Self-Service works on top of OpenNebula’s OCCI server, and it allows users to easily create, deploy and manage compute, storage (including upload of images) and network resources in seconds. Its aim is to offer simplified access to shared infrastructure for non-IT end users.

On top of that, OpenNebula Self-Service will come ready to be re-branded, as it is easily customizable (icons, help texts and logos). Last but not least, it will include internationalization support.

Here are some screenshots of the new graphical user interface: