When running many VMs with persistent images, you need shared storage behind your OpenNebula hosts so that VMs can be recovered quickly if a host fails. However, SANs are expensive, and an NFS server or NAS cannot provide both performance and fault tolerance.

A distributed fault-tolerant network filesystem fits neatly into this gap: it provides shared storage without dedicated storage hardware, and adds fault tolerance by replicating your data across different nodes.

I work at LiberSoft, where we evaluated two open-source distributed filesystems: MooseFS and GlusterFS. A third option could be Ceph, which is currently under heavy development and probably not yet production-ready, but it will certainly be a good alternative in the near future.

Our choice fell on MooseFS because of its great expandability (you can add as many disks as you want, of any size you prefer) and its web monitor, where you can easily check the status of your shared storage (replication status or disk errors). We have therefore published in the Ecosystem section a new transfer manager and some basic instructions for getting it working with OpenNebula.

We had promising results during a test deployment of 4 nodes (Gateway servers with 2x Xeon X3450, 12GB RAM, and 2x2TB SATA2 disks each) for a private cloud at the National Central Library of Florence (Italy), which will grow over the next few months as most of its Windows and Linux servers are moved onto the cloud.

The requirements for this project were to use ordinary, affordable hardware and open-source software to avoid any possible vendor lock-in, with the goal of lowering energy consumption and hardware maintenance costs.

After our successful participation in last year’s Google Summer of Code, the OpenNebula project will once again be applying to be a mentoring organization in Google Summer of Code 2011.

As part of our application, we are compiling a list of possible student projects for this summer. We’d like to encourage members of the OpenNebula community to suggest project ideas and to volunteer to mentor students this summer. If you have an interesting project idea, or would be interested in mentoring an OpenNebula student project this summer, please send a message to our mailing list. Please note that the application deadline is March 11th, so we need to collect all project ideas before then.

We are happy to announce that an OpenNebula project has been created in the openSUSE Build Service. We would like to thank Robert Schweikert, Peter Linnell and Greg Freemyer for their efforts and for taking the time to answer our questions in their mailing list.

If you have the time to test it or want to help them improve it, please send your feedback to the openSUSE Packaging mailing list!

Clustered services are multi-tier services composed of several components, which can be hosted as a group of interconnected virtual machines in a cloud with specific deployment requirements. The OpenNebula Service Manager is a component that enables end users to deploy their clustered applications and manage their life cycle. It provides atomic operations over all components of an application to support full life-cycle management.

The OpenNebula Service Manager adds the following features to the OpenNebula experience:

  • Management of multi-tier services with atomic operations (submit, shutdown, cancel, stop, suspend, resume, delete, list and show)
  • Description of service component (VM) interdependencies through a Service Description Language (SDL)
  • Non-intrusive install (no OpenNebula modification required)
  • Graphical representation of services
  • Multiple options for traversing the dependencies graph
  • New command line interface (oneservice)
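Purely as an illustration of the idea (the attribute names below are hypothetical; the actual syntax is defined in the component's SDL documentation), a two-tier service whose web tier depends on a database tier might be described along these lines:

```
# Hypothetical SDL sketch -- not the component's actual syntax
NAME = "web-service"
VM   = [ NAME = "db",  TEMPLATE = "mysql.one" ]
VM   = [ NAME = "web", TEMPLATE = "apache.one", DEPENDS = "db" ]
```

With dependencies expressed this way, the Service Manager can traverse the graph and boot the database tier before the web tier that depends on it.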

Additional information:

This component was developed as a student project by Waheed Iqbal as part of Google Summer of Code 2010. The OpenNebula team will continue to develop this component, building on Waheed’s excellent work over the summer.

Hi everyone!  Welcome to my first post of hopefully many.  My name is John Dewey.  I am a Lead Software Engineer in AT&T’s Cloud Team, and currently maintain releases of the OCA RubyGem along with other gems.  I wanted to announce the availability of the OCCI Client RubyGem.  This gem was built against version 5 of the OGF OCCI API Specification.

For information on how to use the gem, please reference the documentation.

I would also like to thank Josh and Javier for their peer-review and assistance.

We are happy to announce that OpenNebula 2.0.1 will be included in the upcoming release of Ubuntu 11.04 (Natty Narwhal)! In fact, if you are using the Natty repositories, the OpenNebula 2.0.1 packages are already available there.

Although OpenNebula 1.2 has been available in Ubuntu since Jaunty, it had not been updated to newer versions. The Ubuntu package has now been updated to OpenNebula 2.0.1 thanks to Damien Raude-Morvan, who manages the Alioth project that produced the OpenNebula Debian package, which was then merged into Ubuntu.

We would like to thank Damien again for his efforts, and also the Ubuntu MOTUs who have helped us with the merge!

A couple of months ago our friends at Cfengine presented a brief overview of the possibilities of a Cfengine-managed OpenNebula setup at the Large Installation System Administration (LISA) conference in San Jose. The Cfengine team presented how Cfengine may be used on both the physical and virtual sides of an OpenNebula-based cloud. More specifically, they presented how Cfengine can be used to install and configure the physical infrastructure in an OpenNebula cloud, followed by the launch and configuration of generic virtual machine images that will run on top of that OpenNebula infrastructure.

In a recent news article, Cfengine announced that its Orion Cloud Pack, originally conceived to make the Amazon EC2 Cloud simple to use for Cfengine users, also works with OpenNebula cloud instances.

In July 2010 the StratusLab project conducted two surveys to collect requirements for the StratusLab cloud distribution and to understand existing experience with virtualization and cloud technologies in Europe. Funded through the European Union Seventh Framework Programme (FP7), StratusLab is a two-year project aimed at integrating cloud computing technologies into grid infrastructures.
The survey results are presented in project deliverable D2.1, Review of the Use of Cloud and Virtualization Technologies in Grid Infrastructures. The project conducted two online surveys, one aimed at system administrators and the other at users. Over two-thirds of the sysadmins and over three-quarters of the users surveyed intend to use cloud technologies; perhaps more surprisingly, over one-third of both groups are already using them. The most popular public clouds are Amazon Web Services and Google App Engine, while OpenNebula is the most popular open-source tool for cloud computing management.
The StratusLab surveys have identified certain trends and requirements for cloud technologies among the Grid community, which will be addressed to provide the first full cloud solution for grid and cluster computing. The StratusLab project aims to produce a toolkit, based on OpenNebula, to cloud-enable Grid infrastructures. The first version of its cloud computing distribution was released a few weeks ago.

I am happy to announce the first release of the Python OCA bindings. These bindings wrap OpenNebula's XML-RPC methods in Python objects, allowing developers to interact with OpenNebula in a more pythonic way.

The package is available on PyPI, so if you want to try it, just run:

$ easy_install oca

Or download the code from github and install it by running:

$ python setup.py install

Here is an example that shows how you can add a new host using the Python bindings:

[python]
#!/usr/bin/env python
import oca

client = oca.Client('user:password', 'http://12.12.12.12:2633/RPC2')
new_host_id = oca.Host.allocate(client, 'host_name', 'im_xen', 'vmm_xen', 'tm_nfs')
hostpool = oca.HostPool(client)
hostpool.info()
for i in hostpool:
    if i.id == new_host_id:
        host = i
        break
print host.name, host.str_state
[/python]
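The lookup loop above generalizes to any OCA pool. A small helper, sketched here assuming only that pool entries expose an `id` attribute (as the host pool's entries do):

```python
def find_by_id(pool, wanted_id):
    """Return the first pool entry whose id matches wanted_id, or None."""
    for entry in pool:
        if entry.id == wanted_id:
            return entry
    return None
```

With this, the lookup in the example reduces to `host = find_by_id(hostpool, new_host_id)`.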

For more details on how to use Python OCA, read the documentation.

Try it and share your thoughts, any feedback is welcome.

C12G has published a new howto explaining how to use OpenNebula with qcow images. Using them has the benefits of occupying less space, faster cloning times, and avoiding the problems related to sparse images. It is also a nice example of how OpenNebula's behavior can be adapted to the needs of the system administrator or the infrastructure, in this case the storage model. There are still unexplored qcow features besides the ones described in the text, but the howto serves as a basis for implementing them.
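The space savings come from qcow2's copy-on-write backing files: a clone records only the blocks that diverge from its base image, so creating it is nearly instant. A minimal sketch of that cloning step (the paths are placeholders, and qemu-img must be installed for the command to actually run):

```python
def qcow_clone_command(base_image, clone_image):
    """Build the qemu-img invocation that creates a copy-on-write
    clone backed by base_image (no data is copied up front)."""
    return ["qemu-img", "create",
            "-f", "qcow2",     # format of the new image
            "-b", base_image,  # backing (base) file
            clone_image]

# e.g. subprocess.call(qcow_clone_command("/srv/images/base.qcow2",
#                                         "/srv/one/0/images/disk.0"))
```

A transfer manager built around this replaces a full image copy with a single metadata-only `qemu-img create`, which is exactly where the faster cloning times come from.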