C12G has created an introductory article describing how to integrate public clouds with OpenNebula for cloudbursting. The white paper covers the integration of public clouds with private cloud instances running OpenNebula. It first describes a general provisioning scenario that combines local and external cloud resources, then briefly presents the architecture of OpenNebula and the main components involved in a hybrid cloud setting. The document ends with some considerations and the minimum requirements to deploy a service in a hybrid cloud.
The OpenNebula Team is happy to announce the new OpenNebula IRC Sessions. In these sessions the OpenNebula developers will be available for questions in the #opennebula IRC channel on irc.freenode.net. The developers will answer questions about new features, as well as development and configuration issues not covered in the mailing list archive.
These sessions will usually be scheduled in the first week of each month.
First session: Monday, 9 May 2011, 15:00 UTC
Attendees will need an IRC client connected to irc.freenode.net and joined to the #opennebula channel.
C12G has created a new article to describe how to extend the OpenNebula monitoring system. OpenNebula needs to monitor the physical resources known to the system in order to extract information that in turn is used by the scheduler to enforce (and comply with) placement policies, keeping the host capacity from being overbooked.
The Monitoring System in OpenNebula follows the design guidelines of the rest of the architecture, in particular its modularity. In this case, modularity takes the form of a plugin approach, where the information is extracted using 'probes': simple scripts that return the pieces of information wanted.
This new howto shows how to extend the Monitoring System to report disk space availability, and then use that information in Virtual Machine templates to ensure that the chosen host has enough disk space for the image before running the Virtual Machine.
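As a concrete illustration of the probe approach, a disk-space probe can be as small as the sketch below. The directory, the attribute name (`FREE_DISK`) and the threshold in the usage note are assumptions for illustration; the howto may use different names.

```shell
#!/bin/sh
# Minimal sketch of a monitoring probe (directory and attribute name are
# illustrative assumptions). A probe simply prints NAME=VALUE pairs, which
# OpenNebula turns into attributes of the monitored host.
DATASTORE_DIR=${DATASTORE_DIR:-/var/lib/one}
# Fall back to the root filesystem if the directory does not exist here.
[ -d "$DATASTORE_DIR" ] || DATASTORE_DIR=/

# Free space, in megabytes, on the partition holding the VM images.
FREE_MB=$(df -Pm "$DATASTORE_DIR" | awk 'NR==2 {print $4}')

echo "FREE_DISK=$FREE_MB"
```

Once such a probe is registered with the host's information driver, a Virtual Machine template can require hosts with enough space, e.g. `REQUIREMENTS = "FREE_DISK > 10240"` for an image needing 10 GB.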
When running many VMs with persistent images, shared storage is needed behind the OpenNebula hosts to allow faster recovery in case of host failure. However, SANs are expensive, and an NFS server or NAS cannot provide both performance and fault tolerance.
A distributed, fault-tolerant network filesystem fits neatly into this gap. It provides shared storage without dedicated storage hardware, and achieves fault tolerance by replicating your data across different nodes.
I work at LiberSoft, where we evaluated two open-source distributed filesystems, MooseFS and GlusterFS. A third option would be Ceph, which is still under heavy development and probably not production-ready yet, but it will certainly be a good alternative in the near future.
Our choice fell on MooseFS because of its great expandability (you can add as many disks as you want, of any size you prefer) and its web monitor, where you can easily check the status of your shared storage (replication status or disk errors). We have therefore published in the Ecosystem section a new transfer manager and some basic instructions to get it working with OpenNebula.
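For reference, the host-side setup behind such a transfer manager boils down to two commands. The master hostname, mount point and replication goal below are illustrative assumptions, not the exact published instructions:

```shell
# Mount the MooseFS namespace where OpenNebula keeps its images
# ("mfsmaster" and the mount point are illustrative assumptions).
mfsmount /var/lib/one -H mfsmaster

# Ask MooseFS to keep two copies of everything under the mount point,
# so a single failed chunk server does not lose any VM image.
mfssetgoal -r 2 /var/lib/one
```

With every host mounting the same MooseFS namespace, a VM's persistent image is already visible on the new host after a failure, which is what makes the fast recovery possible.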
We obtained promising results during a test deployment of 4 nodes (Gateway servers with 2x Xeon X3450, 12 GB RAM, 2x 2 TB SATA2 disks) for a private cloud at the National Central Library of Florence (Italy), which will grow over the next few months as most of its Windows and Linux servers move to the cloud.
The requirements for this project were to use ordinary, affordable hardware and open-source software, avoiding any possible vendor lock-in and lowering energy consumption and hardware maintenance costs.
As part of our application, we are compiling a list of possible student projects for this summer. We’d like to encourage members of the OpenNebula community to suggest project ideas and to volunteer to mentor students this summer. If you have an interesting project idea, or would be interested in mentoring an OpenNebula student project this summer, please send a message to our mailing list. Please note that the application deadline is March 11th, so we need to collect all project ideas before then.
We are happy to announce that an OpenNebula project has been created in the openSUSE Build Service. We would like to thank Robert Schweikert, Peter Linnell and Greg Freemyer for their efforts and for taking the time to answer our questions on their mailing list.
If you have the time to test it or want to help them improve it, please send your feedback to the openSUSE Packaging mailing list!
Clustered services are multi-tier services composed of several components/tiers, hosted as a group of interconnected virtual machines in a cloud with specific deployment requirements. The OpenNebula Service Manager is a component that enables end users to deploy and manage the life cycle of such clustered applications, providing atomic operations over all of their components.
The OpenNebula Service Manager adds the following features to the OpenNebula experience:
- Management of multi-tier services with atomic operations (submit, shutdown, cancel, stop, suspend, resume, delete, list and show)
- Description of interdependencies between service components (VMs) through a Service Description Language (SDL)
- Non-intrusive install (no OpenNebula modification required)
- Graphical representation of services
- Multiple options for traversing the dependencies graph
- New command line interface (oneservice)
- Documentation on installation, configuration and usage
- Case study: Deployment of a multitier web app
- OpenNebula Service Management Tool Testing
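To give a feel for the tool, a session with the new CLI might look like the sketch below. The file name and service ID are illustrative assumptions; the operations are the atomic ones listed above:

```shell
# Hypothetical oneservice session (file name and service ID are assumptions).
oneservice submit web-app.sdl   # deploy a multi-tier service from its SDL file
oneservice list                 # list services and their states
oneservice show 0               # show the components of service 0
oneservice shutdown 0           # shut down every tier of service 0
```

Because the install is non-intrusive, these commands operate alongside the usual onevm tools without modifying OpenNebula itself.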
This component was developed as a student project by Waheed Iqbal as part of Google Summer of Code 2010. The OpenNebula team will continue to develop this component, building on Waheed’s excellent work over the summer.
Hi everyone! Welcome to my first post of hopefully many. My name is John Dewey. I am a Lead Software Engineer in AT&T’s Cloud Team, and currently maintain releases of the OCA RubyGem along with other gems. I wanted to announce the availability of the OCCI Client RubyGem. This gem was built against version 5 of the OGF OCCI API Specification.
For information on how to use the gem, please reference the documentation.
We are happy to announce that OpenNebula 2.0.1 will be included in the upcoming release of Ubuntu 11.04 (Natty Narwhal)! In fact, if you are using the Natty repositories, the OpenNebula 2.0.1 packages are already available there.
Although OpenNebula 1.2 has been available in Ubuntu since Jaunty, it had not been updated to newer versions. The Ubuntu package has now been updated to OpenNebula 2.0.1 thanks to Damien Raude-Morvan, who manages an Alioth project that produced an OpenNebula Debian package, which was then merged into Ubuntu.
We would like to thank Damien again for his efforts, and also the Ubuntu MOTUs who helped us with the merge!
A couple of months ago our friends at Cfengine presented a brief overview of the possibilities of a Cfengine-managed OpenNebula setup at the Large Installation System Administration (LISA) conference in San Jose. The Cfengine team showed how Cfengine can be used on both the physical and virtual sides of an OpenNebula-based cloud: first to install and configure the physical infrastructure of an OpenNebula cloud, and then to launch and configure generic virtual machine images running on top of that infrastructure.
In a recent news article, Cfengine announced that its Orion Cloud Pack, originally conceived to make the Amazon EC2 Cloud simple to use for Cfengine users, also works on OpenNebula cloud instances.