OpenNebula Releases Microsoft Hyper-V Integration Prototype

The OpenNebula project is happy to announce the release of a development version of the new plug-ins to build clouds on Microsoft Hyper-V. This new prototype, the first result of its collaboration with Microsoft on cloud computing innovation and interoperability, allows users to build and manage OpenNebula clouds on a Hyper-V based virtualization platform. The new components are available for download under the Apache license as a new OpenNebula ecosystem project. The OpenNebula project provides support for the deployment and tuning of the new drivers through its ecosystem mailing list.

The support for Hyper-V consolidates OpenNebula’s position as a fully open-source, interoperable and innovative solution for the complete and comprehensive management of virtualized data centers to enable private, public and hybrid clouds. OpenNebula’s interoperability makes the cloud an evolution by offering common cloud standards and interfaces, leveraging existing IT infrastructure, protecting existing investments, and avoiding vendor lock-in. In order to provide greater flexibility, the integration supports both variants of Hyper-V, namely those in Windows Server 2008 and Windows Server 2008 R2 SP1. Moreover, the integration will not require the installation of new services in the cloud nodes, making it quite simple and rapid to build an OpenNebula cloud on existing Hyper-V deployments.

OpenNebula would like to thank enterprise cloud provider VrStorm for its help in the evaluation of the new plug-ins.

OneVBox: New VirtualBox driver for OpenNebula

This new contribution to the OpenNebula Ecosystem expands OpenNebula by enabling the use of the well-known hypervisor VirtualBox to create and manage virtual machines.

OneVBox supports the upcoming OpenNebula 3.0 (currently in beta) and VirtualBox 4.0. It is composed of several scripts, mostly written in Ruby, which interpret the XML virtual machine descriptions provided by OpenNebula and perform necessary actions in the VirtualBox node.

OneVBox can not only deploy, but also save, restore and migrate VirtualBox VMs from one physical node to another.

Using the new OneVBox driver is very easy and can be done in a few steps:

  1. Download and install the driver. Run from the driver folder:
    user@frontend $> ./install.sh

    Make sure that you have permission to write in the OpenNebula folders. $ONE_LOCATION can be used to define a self-contained install path; otherwise the driver will be installed system-wide.

  2. Enable the plugin. Put this in the oned.conf file and start OpenNebula:

    [shell]
    IM_MAD = [
    name = "im_vbox",
    executable = "one_im_ssh",
    arguments = "-r 0 -t 15 vbox" ]

    VM_MAD = [
    name = "vmm_vbox",
    executable = "one_vmm_exec",
    arguments = "vbox",
    default = "vmm_exec/vmm_exec_vbox.conf",
    type = "xml" ]
    [/shell]

  3. Add a VirtualBox host. For example:
    oneadmin@frontend $> onehost create hostname im_vbox vmm_vbox tm_ssh

    OneVBox also includes an OpenNebula Sunstone plugin that enables adding VirtualBox hosts and creating VirtualBox VM templates from the web interface. To enable it, just add the following lines to etc/sunstone-plugins.yaml:

    [shell]
    - user-plugins/vbox-plugin.js:
    :group:
    :ALL: true
    :user:
    [/shell]

    (Tip: when copying and pasting, avoid using tabs in YAML files; they are not supported.)
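
With the host added, VirtualBox VMs are launched with regular OpenNebula templates. The fragment below is only a minimal sketch using standard OpenNebula template attributes; the image and network names are placeholders and are assumed to be already registered, and any OneVBox-specific attributes are documented on the project page:

[shell]
# vbox_vm.one -- minimal example template (image and network names are placeholders)
NAME   = "ttylinux-vbox"
CPU    = 1
MEMORY = 256
DISK   = [ IMAGE = "ttylinux-vbox-disk" ]
NIC    = [ NETWORK = "Small network" ]
[/shell]

The template is then instantiated as usual:

[shell]
oneadmin@frontend $> onevm create vbox_vm.one
[/shell]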

For more information, you can visit the OpenNebula Ecosystem page for OneVBox. If you have questions or problems, please let us know on the Ecosystem mailing list or open an issue in the OneVBox GitHub tracker.

Integrating SUSE Studio with OpenNebula

C12G Labs has just released a new guide on integrating SUSE Studio with OpenNebula. This guide addresses how to create or adapt any SUSE Studio appliance by simply adding a 20-line script to the appliance, which will integrate the appliance’s network with OpenNebula and will handle the contextualization process.

It also illustrates further integration steps so that SUSE Studio URLs can be handled directly by OpenNebula: with a modification of a few lines to the driver, it can manage the whole download, unpack and register process.
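
The details are in the guide, but the core idea is a small boot-time script inside the appliance that reads the OpenNebula context CD and configures the network. The sketch below only illustrates the pattern; the actual 20-line script in the guide may differ, and the device and variable names used here are assumptions:

[shell]
#!/bin/bash
# Mount the context CD generated by OpenNebula and source its variables
mount -t iso9660 /dev/cdrom /mnt
if [ -f /mnt/context.sh ]; then
    . /mnt/context.sh
    # Apply hostname and network settings passed in the CONTEXT section
    hostname $HOSTNAME
    ifconfig eth0 $IP_PUBLIC up
fi
umount /mnt
[/shell]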

Take a look at the SUSE Studio appliance configuration and see it running on top of OpenNebula.

An additional task all OpenNebula users must face is the process of designing new images. This process usually involves downloading installation media, creating a temporary virtual machine for the installation (usually handled by OpenNebula) and performing the actual operating system installation, which includes partitioning the virtual disks, installing the required packages and preparing the virtual machine for OpenNebula. With SUSE Studio’s wonderful and intuitive interface the whole process can be done with a few clicks, reducing the time spent preparing images to a minimum.

Combining SUSE Studio’s powerful API with OpenNebula’s state-of-the-art integration capabilities, there is room for much more improvement in the OpenNebula integration with SUSE Studio. An example would be adding a new tab to Sunstone’s Image creation form, specific to SUSE Studio, where a user would see a list of SUSE Studio appliances; by simply selecting one, OpenNebula would clone the SUSE Studio appliance, adapt it and register it, making the appliance available to all OpenNebula users.

Additionally, there are some benefits for Amazon EC2 hybrid users. SUSE Studio can upload the appliances to EC2, so you can easily manage the appliance life cycle from a single point.

So, check out the Using SUSE Studio with OpenNebula guide, and start using SUSE Studio appliances with OpenNebula!

OpenNebula Interoperability Working Group

OpenNebula has emerged as a viable open source alternative to commercial cloud management solutions. A number of companies are adopting OpenNebula in their IT infrastructure. Not only that, it is also being actively promoted by the European Commission as the basis for numerous cloud computing research projects. There is a real need for interaction and exchange of ideas between such projects and industry partners alike. The OpenNebula Interoperability Working Group has been established to facilitate such an exchange.

A lot of effort is being spent by various working groups on coming up with standards for cloud platform management. Open Cloud Computing Interface (OCCI), Cloud Data Management Interface (CDMI), and Open Virtualization Format (OVF) are among the major open standards being developed for standardizing cloud management interfaces and cloud application packaging. Many upcoming European projects that plan to use OpenNebula also intend to support some of these open standards. A major thrust area of this working group will be to ensure maximum interoperability in the interfaces being developed by such projects.

The first interoperability working group teleconference took place on September 7, 2011. Attending organizations included INRIA, OW2, UCM, XLab, and TU Dortmund. Other prominent research groups will be invited to join this interoperability effort very soon. Hopefully, as a result of this working group’s efforts, the extensibility and re-usability of OpenNebula ecosystem products will be vastly improved.

Recently, we decided to open this group’s mailing list to the public. OpenNebula ecosystem members and users are invited to participate in and follow the activities of this working group. The Interoperability mailing list will remain the primary mode of information dissemination. Periodic posts, along with interesting how-tos, API documentation and project showcases, will be made available on the working group’s Wiki page.

Extended ldap_auth module

The current ldap_auth module in OpenNebula assumes that the username is the same as the DN of the LDAP entry. In more complex LDAP installations this is often not the case, and LDAP authentication is a bit more involved:

  • Bind as a dedicated “search LDAP user”.
  • Search the directory tree for the username.
  • Get the DN from the search result.
  • Bind as the DN with the user password.

I modified the current ldap_auth.rb to use this more complex process if the auth.conf file defines "search_filter" (if undefined, the original behavior is used, so the change is backwards compatible). If defined, "search_filter" is expected to contain a suitable search string with "@@LOGIN@@" in place of the user name (to be replaced at runtime), e.g. something like: "(&(cn=@@LOGIN@@)(objectClass=user))"

It also expects the following config entries (a sample auth.conf fragment is shown after the list):

  • sec_principal: the DN of the LDAP search user.
  • sec_passwd: the password for sec_principal.
  • search_base: the base in the LDAP tree from which to search.
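
For illustration, the relevant part of auth.conf might look like the fragment below. This is only a sketch: all values are placeholders, and the exact file layout depends on how your installation loads the :ldap: section read by the module.

[shell]
:ldap:
  :host: ldap.example.org
  :port: 389
  :sec_principal: cn=onesearch,ou=services,dc=example,dc=org
  :sec_passwd: secret
  :search_base: ou=people,dc=example,dc=org
  :search_filter: (&(cn=@@LOGIN@@)(objectClass=user))
[/shell]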

Code below (works with OpenNebula 2.0 and 2.2, but not with 3.0 beta):

[ruby]
# -------------------------------------------------------------------------- #
# Copyright 2010, C12G Labs S.L., CSIRO
#
# This file is part of OpenNebula Ldap Authentication.
#
# OpenNebula Ldap Authentication is free software: you can redistribute it
# and/or modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# OpenNebula Ldap Authentication is distributed in the hope that it will be
# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
# Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OpenNebula Ldap Authentication. If not, see
# http://www.gnu.org/licenses/
# -------------------------------------------------------------------------- #

require 'rubygems'
require 'net/ldap'

# Ldap authentication module.

class LdapAuth
    def initialize(config)
        @config = config
    end

    # Return a Net::LDAP handle configured with the given bind credentials
    def getLdap(user, password)
        ldap = Net::LDAP.new
        ldap.host = @config[:ldap][:host]
        ldap.port = @config[:ldap][:port]
        ldap.auth user, password
        ldap
    end

    # Resolve the login name to its LDAP DN. If no search_filter is
    # configured, the login name is returned unchanged (original behavior).
    def getLdapDN(user)
        search_filter = @config[:ldap][:search_filter]
        if (search_filter.nil?)
            return user
        end
        search_filter = search_filter.gsub("@@LOGIN@@", user)
        ldap = getLdap(@config[:ldap][:sec_principal], @config[:ldap][:sec_passwd])
        begin
            ldap.search( :base => @config[:ldap][:search_base], :attributes => 'dn',
                         :filter => search_filter, :return_result => true ) do |entry|
                STDERR.puts "Found #{entry.dn}"
                return entry.dn
            end
        rescue Exception => e
            STDERR.puts "LDAP search failed: #{e.message}"
        end
        return nil
    end

    # Authenticate the user: resolve the DN first, then bind with the
    # user-supplied token (password)
    def auth(user_id, user, password, token)
        dn = getLdapDN(user)
        if (dn.nil?)
            STDERR.puts("User #{user} not found in LDAP")
            return false
        end
        begin
            if getLdap(dn, token).bind
                STDERR.puts "User #{user} authenticated!"
                return true
            end
        rescue Exception => e
            STDERR.puts "User authentication failed for #{dn}: #{e.message}"
            return false
        end
        STDERR.puts "User #{user} could not be authenticated."
        return false
    end

end
[/ruby]

SVMSched: a tool to enable On-demand SaaS and PaaS on top of OpenNebula

SVMSched [1] [2] is a tool designed to enable on-demand SaaS clouds on virtualized infrastructures, and it can also easily be set up to support PaaS clouds. SVMSched can be used to build cloud platforms where a service is deployed to compute a user-given dataset with a predefined application, based on given hardware requirements (CPU, memory). In such a context, SVMSched seamlessly and automatically creates a custom virtual computing environment to run the service on-the-fly. This virtual computing environment starts the execution of the service at startup and is automatically destroyed after the execution, freeing up the allocated resources.

Benefits of SVMSched

  • Configuration-based On-demand Cloud Services: A SVMSched cloud is based on a single configuration file in which you define a set of software services that you wish to provide from your virtualized infrastructure. This configuration file also supports parameters to connect to the OpenNebula server, scripts and data necessary to automatically build virtual environments to run services, etc.
  • Automatic provisioning and high-level abstraction of virtual machines: After deploying SVMSched in your cloud infrastructure, you no longer need to manipulate virtual machine templates. To run a service you only need to make a simple request of the form “I want a virtual machine with 4 CPUs and 512 MB of memory to compute a given set of data with a specific application”. SVMSched then does the rest for you (prepares the virtual machine’s image, instantiates the virtual machine, deploys and starts it on a node it selects seamlessly, starts the execution of the service within the virtual machine, and shuts down the virtual machine when the execution is completed).
  • Scheduling: SVMSched enables advanced scheduling policies such as task prioritization, best-effort along with automatic preemption and resuming (plus migration, where required), resource sharing, etc.
  • Remote Data Repository: SVMSched is designed to allow you to define shared data repositories on the network that can be mounted automatically within the file system of the virtual machines at startup, before the execution of the service starts. Such a repository can be useful to store binaries and any other data required by the compute tasks, and thus provides a mechanism to ease the handling of input and output data. Hence, you can avoid handling large virtual machine images (which take a long time to set up), while minimizing the risk of losing data and computation already done if a virtual machine fails unexpectedly.

Integration Architecture

The figure below shows the architecture for integrating OpenNebula with SVMSched. In brief, SVMSched:

  • Works as a drop-in replacement for the OpenNebula’s default scheduler (mm_sched).
  • Enables a specific socket interface managed by a listening daemon. The socket works over IP, so remote clients are possible.
  • Enables a built-in UNIX-like command line client. The client can be located in a different server than the SVMSched daemon.
  • Communicates with OpenNebula through the XML-RPC interface. SVMSched and OpenNebula can be hosted on different servers.
  • Relies on a single XML configuration file; no need to manipulate virtual machine templates.

SVMSched Integration Architecture

Use cases

Without being exhaustive, these are some situations where SVMSched can bring you significant added value.

Automatic deployments for on-demand PaaS/SaaS services

Typical contexts are: executing services based on computational applications (data/input => processing => results/output), resource/platform leasing, etc. Software testing (validation testing, non-regression testing, etc.) is a typical example. In such a context, the infrastructure behaves as a dynamic virtual cluster, in which virtual machines are created and deployed on-the-fly for specific and limited lifetimes, after which they disappear. Each virtual machine has a specific/custom configuration (software stack, amount of CPU, memory size). After its lifetime, which is determined by the time required to run the service, the virtual machine is automatically destroyed to free the allocated resources. The following points explain the few things you need to set up such a cloud:

  1. Define one or more services in SVMSched’s configuration file according to your needs. E.g. a service can consist of running a specific unit test script.
  2. If necessary, set up a data repository (a shared network file system) in which the binaries and data required to run services will be located. Recall that SVMSched can mount this repository automatically into the file system of the virtual machines.
  3. Finally, running a service is a straightforward task. For example, the following command runs an instance of the service named “example-service1” using a virtual machine with 2 CPUs and 1024 MB of memory. In the example, we assume that the input data is located in /data/repository/file.dat, specified with the -a option:

    $ svmschedclient [-H svmsched@server=localhost] --vcpu=2 --memory=1024 \
              -r <example-service1> -a /data/repository/file.dat

On-demand Infrastructure for Training

See here for an example. In such a situation SVMSched can be especially useful to avoid setting up multiple virtual machine templates manually, while still being able to create virtual machines with various hardware and software configurations. Indeed, doing this by hand can be time-consuming. For example, assume that you have to deal with several trainings, each requiring a practical session (e.g. parallel programming, web application deployment, etc.). The software and hardware requirements of the virtual machines needed for the different practical sessions can vary considerably, and may require setting up a lot of virtual machine templates. You may also need the virtual machines to be destroyed automatically at the end of each practical session (given by a duration). Using SVMSched, only four straightforward things are needed to set up such an infrastructure:

  1. Define each practical session as a service in SVMSched’s configuration file.
  2. For each service, set up a data repository in which the specific software binaries, libraries and data required for that practical session will be located. Recall that SVMSched can mount this repository automatically into the file system of the virtual machines.
  3. For the main program (executable), use a simple script that enforces a sleep for a given duration.
  4. Finally, for each student who should attend a given session you only need to request a virtual machine with specific hardware requirements (memory and CPU) for a given duration. The example below shows how to create a virtual machine with 2 CPUs, 1024 MB of memory and a lifetime of 3 hours. HINT: if all virtual machines need the same requirements, you can use a loop over the number of attendees, as sketched after this list.

    $ svmschedclient [-H svmsched@server=localhost] --vcpu=2 --memory=1024 \
              -r <training-service-id> -a 7200
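
As a rough sketch of that hint, the loop below simply repeats the command shown above once per attendee; the service identifier and lifetime value are taken from the example, and the attendee count is a placeholder:

[shell]
# Request one virtual machine per attendee (here 10), all with the same requirements
for i in $(seq 1 10); do
    svmschedclient [-H svmsched@server=localhost] --vcpu=2 --memory=1024 \
        -r <training-service-id> -a 7200
done
[/shell]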

Co-hosting of production and development services

A typical case is when you want to use the idle resources of a production infrastructure to carry out development tasks such as software testing (unit tests, non-regression testing or NRT, etc.). SVMSched allows you to distinguish production tasks (prioritized and non-preemptable) from best-effort tasks (non-prioritized and preemptable). So, when operating, SVMSched can automatically preempt best-effort jobs when there are no resources available to run queued production tasks. Preempted jobs are automatically resumed as soon as resources become idle. The decisions to preempt and resume are taken autonomously. Assuming that you have already set up an SVMSched cloud, the following commands show how to run two jobs in production and best-effort modes, respectively.

$ svmschedclient [-H svmsched@server=localhost] --vcpu=2 --memory=1024 \
          -r <prod-service-id> -a /data/repository/file1.dat [-t prod]
$ svmschedclient [-H svmsched@server=localhost] --vcpu=2 --memory=1024 \
          -r <nrt-service-id> -a /data/repository/file2.dat -t beff

Conclusion

SVMSched (Smart Virtual Machine Scheduler) is a tool designed to enable and ease the set-up of on-demand SaaS and PaaS services on top of OpenNebula. SVMSched is open source and freely available for download [1]. However, SVMSched is still at a development stage and not yet production-ready. Being an ongoing project, feedback and collaborations are appreciated, so don’t hesitate to contact the authors if you have questions, suggestions, comments, etc.

References

[1] SVMSched Home. https://gforge.inria.fr/projects/svmsched/

[2] Rodrigue Chakode, Blaise-Omer Yenke, Jean-Francois Mehaut. Resource Management of Virtual Infrastructure for On-demand SaaS Services. In CLOSER2011: Proceedings of the 1st International conference on Cloud Computing and Service Science. Pages 352-361. Noordwijkerhout, Netherlands, May 2011.

Image Creation and Contextualization Guide

C12G has created an Image Contextualization Guide to give guidance on how to create and configure a VM Image to work in the OpenNebula environment. The new guide proposes techniques to create a VM Image from scratch and to prepare existing images to run with OpenNebula.
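
As a taste of what the guide covers, preparing an image typically ends with making it respond to OpenNebula contextualization, which is driven by a CONTEXT section in the VM template. The fragment below is only a sketch: the file path is a placeholder and the exact variables you need depend on your image.

[shell]
# Fragment of a VM template: the files listed here are packed into an ISO
# that the VM mounts at boot time to configure itself
CONTEXT = [
    hostname  = "$NAME",
    ip_public = "$NIC[IP]",
    files     = "/srv/context/init.sh" ]
[/shell]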

This article is part of the new Knowledge Base that is being extended by C12G Labs.

OpenNebula Scalability Guide

C12G has created a Scalability Guide to give guidance on how to install and tune OpenNebula for optimal and scalable performance in your environment. The software comes with several modifiable parameters that can be adapted to the specific needs of your infrastructure and workload.
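
Two of the usual knobs are the monitoring intervals and the database backend. The fragment below is purely illustrative (the parameter names are those found in oned.conf; the right values, and whether to move from SQLite to MySQL, depend on the size of your infrastructure and are discussed in the guide):

[shell]
# oned.conf -- example tuning (values are illustrative only)
HOST_MONITORING_INTERVAL = 120
VM_POLLING_INTERVAL      = 120

DB = [ backend = "mysql",
       server  = "localhost",
       user    = "oneadmin",
       passwd  = "oneadmin",
       db_name = "opennebula" ]
[/shell]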

This article is part of the new Knowledge Base that is being extended by C12G Labs.

OCCI 1.1 for OpenNebula

A recommendation for version 1.1 of the Open Cloud Computing Interface (OCCI) was recently released by the Open Grid Forum (OGF) (see OGF183 and OGF184). To add OCCI 1.1 support for OpenNebula, we created the Ecosystem project “OCCI for OpenNebula”. The goal of the project is to develop a complete, robust and interoperable implementation of OCCI 1.1 for OpenNebula.

Although the project is still in an early stage, today we released a first version that supports creating and deleting Virtual Networks, Images and Machines. Information on installation and configuration of the OCCI 1.1 extension can be found in the Wiki of the project.
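To give an idea of what an OCCI 1.1 interaction looks like, the request below creates a compute resource using the standard text/occi rendering, where the kind and attributes travel in HTTP headers. The endpoint, port and credentials are assumptions that depend on how the service is deployed; see the project Wiki for the actual configuration.

[shell]
curl -X POST http://localhost:3300/compute/ \
     -u oneadmin:password \
     -H 'Content-Type: text/occi' \
     -H 'Category: compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"' \
     -H 'X-OCCI-Attribute: occi.core.title="testvm", occi.compute.cores=1, occi.compute.memory=0.5'
[/shell]
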

Florian Feldhaus, Piotr Kasprzak – TU Dortmund

Integrating Public Clouds with OpenNebula for Cloudbursting

C12G has created an introductory article describing how to integrate public clouds with OpenNebula for cloudbursting. The white paper describes the integration of public clouds with private cloud instances running OpenNebula. A general provisioning scenario that combines local and external cloud resources is described first. Afterwards, the architecture of OpenNebula and the main components involved in a hybrid cloud setting are briefly presented. The document ends with some considerations and the minimum requirements to deploy a service in a hybrid cloud.
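
For a flavour of how this looks in practice with the Amazon EC2 driver, a hybrid setup boils down to registering EC2 as one more host and adding an EC2 section to the VM template. The sketch below follows the OpenNebula EC2 hybrid documentation of this period, but driver names and attributes may vary with your version, and all values are placeholders:

[shell]
# Register Amazon EC2 as an additional OpenNebula host
oneadmin@frontend $> onehost create ec2 im_ec2 vmm_ec2 tm_dummy

# Fragment of a VM template describing how the VM is launched when it
# is scheduled on the EC2 host
EC2 = [
    AMI              = "ami-00000000",
    KEYPAIR          = "gsg-keypair",
    INSTANCETYPE     = "m1.small",
    AUTHORIZED_PORTS = "22" ]
[/shell]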

This article is part of the new Knowledge Base that is being extended by C12G Labs.