An OpenNebula user, John Dewey, has just contributed a new oca rubygem that allows developers to make calls to OpenNebula’s Cloud API (OCA) from their Ruby projects without having to install OpenNebula. The Ruby OCA API has been designed as a wrapper for the XML-RPC methods to interact with the OpenNebula Core, with some basic helpers. This gem was built against OpenNebula 2.0 and will be updated in each release.

If you want to use it in your Ruby projects, you can install it by running the following:

$ sudo gem install oca

Here is a short example that shows how you can use this new oca gem from Ruby. More specifically, this program queries all the running VMs and shuts them down.

[ruby]
#!/usr/bin/env ruby

###################################################################
# Required libraries
###################################################################
require 'rubygems'
require 'oca'

include OpenNebula

# OpenNebula credentials
CREDENTIALS = "oneuser:onepass"
# XML_RPC endpoint where OpenNebula is listening
ENDPOINT = "http://localhost:2633/RPC2"

client = Client.new(CREDENTIALS, ENDPOINT)

vm_pool = VirtualMachinePool.new(client, -1)

# Retrieve the Virtual Machine pool information from the OpenNebula core
rc = vm_pool.info
if OpenNebula.is_error?(rc)
    puts rc.message
    exit -1
end

# Iterate over every VM in the pool and send the shutdown action
vm_pool.each do |vm|
    rc = vm.shutdown
    if OpenNebula.is_error?(rc)
        puts "Virtual Machine #{vm.id}: #{rc.message}"
    else
        puts "Virtual Machine #{vm.id}: Shutting down"
    end
end

exit 0
[/ruby]
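
Shutting machines down is only one of the helpers the gem exposes. As a quick illustration of the other direction, here is a minimal sketch that allocates a new Virtual Machine from an inline template using the same Client object. The template values (VM name, image path, network name) are placeholders, not values from any real cloud; replace them with resources from your own installation.

[ruby]
#!/usr/bin/env ruby

require 'rubygems'
require 'oca'

include OpenNebula

# Same credentials and endpoint as in the example above
CREDENTIALS = "oneuser:onepass"
ENDPOINT    = "http://localhost:2633/RPC2"

client = Client.new(CREDENTIALS, ENDPOINT)

# A minimal VM template; the image path and network name are placeholders
TEMPLATE = <<-EOT
    NAME   = "oca-test"
    CPU    = 1
    MEMORY = 512
    DISK   = [ SOURCE = "/path/to/image.img", TARGET = "hda" ]
    NIC    = [ NETWORK = "Public" ]
EOT

# Build an empty VM object and ask the OpenNebula core to allocate it
vm = VirtualMachine.new(VirtualMachine.build_xml, client)
rc = vm.allocate(TEMPLATE)

if OpenNebula.is_error?(rc)
    puts rc.message
    exit -1
end

puts "Virtual Machine allocated with ID #{vm.id}"
[/ruby]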

We would like to thank John Dewey for this very useful contribution!

The 2nd IEEE International Conference on Cloud Computing Technology and Science will take place in Indianapolis, USA, from November 30th to December 3rd. Part of the OpenNebula team will attend the conference to give a short, 90-minute tutorial on Thursday, December 2nd, from 11:30am to 1:00pm.

This tutorial will present the exciting new features in OpenNebula 2.0, and will cover the following:

  1. Introduction to OpenNebula
  2. Features: What can you do with OpenNebula?
  3. Private cloud management
  4. Public cloud interfaces

Since this is just a 90-minute tutorial, we won’t be able to do many hands-on exercises. Nonetheless, we will be available all day for questions, hands-on tinkering, meetings, etc. If you’d like to meet with us, feel free to approach us that day or, better yet, contact our Community Manager beforehand to arrange a specific meeting time.

The StratusLab project has just released the first version of its cloud computing distribution, which aims to provide a full cloud solution for grid and cluster computing. The StratusLab distribution, which includes OpenNebula as the core virtual machine manager and cloud management tool, is being tested on research grid infrastructures composed of dozens of sites and tens of thousands of physical hosts. The first version of the StratusLab distribution is a technology preview and not yet production-ready, but it will give system administrators and users a chance to try out the new features of what will become an integrated solution for cloud management, running grid services within the cloud, and accessing cloud resources and services from the Grid.

Funded through the European Union Seventh Framework Programme (FP7), the two-year StratusLab project aims to integrate ‘cloud computing’ technologies into ‘grid’ infrastructures. Grids link computers and data that are scattered across the globe to work together for common goals, whilst cloud computing makes software platforms or virtual servers available as a service over the Internet, usually on a commercial basis, and provides a way for organisations to access computing capacity without investing directly in new infrastructure. Linking grid and cloud technologies will result in major benefits for European academic research and is part of the European Commission strategy to develop European computing infrastructures.

Visit http://www.stratuslab.eu/doku.php?id=release:v0.1 for more information or to download the StratusLab distribution.

One month after announcing the delivery of the development versions, C12G has just announced that the stable versions of the software extensions distributed in the Enterprise Edition of OpenNebula have been contributed to the OpenNebula Project. These extensions were created to support customers and partners and to enhance the functionality and performance of OpenNebula in enterprise-class and very-large-scale systems. The contributed components are:

  • Enhanced VMware Adaptor that enables the management of an OpenNebula cloud based on VMware ESX, vCenter and/or VMware Server hypervisors
  • LDAP Authentication Module that permits users to authenticate with the same credentials they have in LDAP, effectively centralizing authentication
  • Accounting Toolset that visualizes and reports resource usage data, and allows their integration with chargeback and billing platforms
  • OpenNebula Express that eases the installation and deployment of OpenNebula clouds

The upcoming 2.0 version of OpenNebula Enterprise will include the most recent thoroughly tested and quality-controlled version of OpenNebula with all available patches, selected stable and tested software extensions from the add-on and ecosystem catalogs, and extended documentation. OpenNebula Enterprise brings the additional benefits of long-term professional, integration and certification support services, and regular updates and upgrades.

The OpenNebula Project endorses these extensions and supports them through the user mailing list. Moreover, the project ensures their full compatibility with current and upcoming releases of OpenNebula. This news confirms that OpenNebula is fully open-source cloud software, not a feature- or performance-limited edition of an Enterprise version. C12G Labs contributes to the sustainability of OpenNebula and is committed to enlarging its community. C12G Labs dedicates a portion of its own engineering resources to supporting and developing OpenNebula, maintaining OpenNebula’s position as the leading, most flexible and innovative open-source technology for cloud computing.

If you’re a system administrator, you’ve probably already heard of Cfengine, a cross-platform datacenter automation framework used by more than 5,000 companies on millions of machines worldwide. With Cfengine, the sysadmin describes the desired system state and Cfengine takes care of the rest: it will install packages, maintain configuration files, keep permissions and ensure the right processes are running according to your policy.

The Cfengine team has been investigating how Cfengine may be used on both the physical and virtual sides of an OpenNebula-based cloud. More specifically, we have been looking into how Cfengine can be used to install and configure the physical infrastructure in an OpenNebula cloud, followed by the launch and configuration of generic virtual machine images that will run on top of that OpenNebula infrastructure.

This week, at the Large Installation System Administration (LISA) conference in San Jose, we will give a brief overview of the possibilities of a Cfengine-managed OpenNebula setup during the Cfengine BoF (Tuesday, November 9th, 7pm-8pm). If you can’t make it to our talk, you can check out the slides from our presentation here.

Cfengine team

The Supercomputing Center of Galicia (CESGA) and the Supercomputing Center Foundation of Castilla y León (FCSCL) have built a federation of cloud infrastructures using the hybrid cloud computing functionality provided by OpenNebula. The two organizations collaborated to run an application that fights malaria across both sites. This is a very interesting use case of cloud federation in the High Performance Computing field.

Last week at ISC Cloud 2010, Ulrich Schwickerath, from the CERN IT-PES/PS Group, presented the latest benchmarking results of CERN’s OpenNebula cloud for batch processing. The batch computing farm forms a critical part of the CERN data centre. Using the new IaaS cloud, both the virtual machine provisioning system and the batch application itself have been tested extensively at large scale. The results show OpenNebula managing 16,000 virtual machines to support a virtualized computing cluster that executes 400,000 jobs.