C12G Labs has just announced the availability of a new version of the OpenNebula Marketplace, a catalog of third-party virtual appliances ready to run in OpenNebula environments.

The new Marketplace is fully integrated with the new OpenNebula 3.6, so any user of an OpenNebula cloud can find and deploy virtual appliances in a single click. The OpenNebula Marketplace is also of interest to software developers looking to quickly distribute a new appliance, making it available to all OpenNebula deployments worldwide.

The Marketplace is not an online store but a service to bring together cloud users interested in pre-built software solutions and software developers and vendors interested in distributing and promoting their applications and services. The OpenNebula Marketplace is available at no charge to all OpenNebula users and appliance developers.

Enjoy this issue, and as always, we welcome your feedback!

As OpenNebula 3.6 comes closer (scheduled for July 9th), we would like to launch a call for translations of our web-based user interfaces: Sunstone and Self-Service.

The existing translations can be updated and new translations submitted through our project site at Transifex (https://www.transifex.com/projects/p/one/). The process is very simple:

  • Log in to Transifex. You can easily register or log in using your Twitter, LinkedIn, Google or Facebook account
  • Click on the language you want to translate and the resource (Sunstone or Self-Service)
  • Hit “Translate now” and start translating right away through the Transifex website

If your language is not available, you can request the creation of a new language for that resource. If you have a translation ready outside Transifex, please upload it to Transifex.

Transifex offers advanced features for translators. You can check their documentation for more information.

Translations reaching a good level of completion will be included in the official release of OpenNebula. The deadline for translations is July 6th at 09:00 CEST.

Thanks for your collaboration!

We already posted the main features of OpenNebula 3.6 Beta, which are of course fully supported by Sunstone. But we have also been working on Sunstone's visual appeal, and we will keep polishing the interface for the final release. Here are some preview screenshots:

New Dashboard

Similar dashboard for Clusters

Marketplace Integration

Host Monitoring Graphs

User Quota Management

You can request an account on our Demo Cloud and try them out. Stay tuned for the final release!

The OpenNebula Cloud offers a virtual computing environment accessible through two different remote cloud interfaces, OCCI and EC2, and two different web interfaces: Sunstone for cloud administrators and Self-Service for cloud consumers. These interfaces access the same infrastructure, i.e., resources created through any of these methods are instantly available in the others. For instance, you can create a VM with the OCCI interface, monitor it with the EC2 interface, and shut it down using the OpenNebula Sunstone web interface.
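As a rough sketch of that round trip, assuming a front-end with the OCCI and EC2 (econe) servers running (the endpoint URL, template attributes and VM ID below are purely illustrative, so the OpenNebula commands are shown as comments):

```shell
# Write a minimal OCCI compute template (attribute names per the OCCI
# interface; the storage href is a hypothetical endpoint):
cat > /tmp/vm.xml <<'EOF'
<COMPUTE>
  <NAME>demo-vm</NAME>
  <INSTANCE_TYPE>small</INSTANCE_TYPE>
  <DISK>
    <STORAGE href="http://cloud.example.org/storage/1"/>
  </DISK>
</COMPUTE>
EOF

# 1. Create the VM through the OCCI interface:
#      occi-compute create /tmp/vm.xml
# 2. Monitor the same VM through the EC2 interface:
#      econe-describe-instances
# 3. Shut it down from Sunstone, or from the native CLI:
#      onevm shutdown <vm_id>
```

The point is that all three front-ends operate on the very same VM object inside OpenNebula.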

This Cloud has been migrated to the latest OpenNebula 3.6 Beta. If you have an account you can still use your old username and password. If not, request a new account and check out the upcoming OpenNebula 3.6 features. These interfaces show you a limited view of the Cloud, as you will not be able to manage certain system resources such as ACL rules, groups or users; nor infrastructure resources such as hosts, clusters and datastores.

Keep in mind that this is a demo cloud: the operations you perform will create virtual network and virtual machine resources, but no real action will be taken. A VM will appear to have been created, but in fact it won't be running anywhere.

During the summer, OpenNebula is participating in several upcoming events in cloud computing:

It would be great to meet you at these venues. If you want to connect, please send an email to contact@opennebula.org

The OpenNebula project is proud to announce the availability of the beta release of OpenNebula 3.6 (Lagoon).

OpenNebula 3.6 features a new hotplug mechanism for disk volumes that supports attaching either volatile volumes or existing images to a running VM. We have also rewritten the Quota and Accounting tools from scratch; they are now included in the OpenNebula core to enhance their integration with the existing AuthZ & AuthN mechanisms and other related tools (e.g., Sunstone). There are other new features as well, such as VM rescheduling, hard reboots, and cloning of disk images.
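As a hedged sketch of the new disk hotplug mechanism (the VM ID, target device and exact subcommand name are illustrative; check `onevm --help` on a 3.6 front-end, so the OpenNebula commands are shown as comments):

```shell
# A disk template describing a new volatile volume to hot-attach:
cat > /tmp/disk.one <<'EOF'
DISK = [
  TYPE   = fs,
  SIZE   = 1024,
  FORMAT = ext3,
  TARGET = vdb
]
EOF

# Attach it to running VM 42 (hypothetical ID):
#   onevm attachdisk 42 --file /tmp/disk.one
# Attaching an existing image instead would reference it by name or ID
# in the DISK vector. Detach later by disk ID:
#   onevm detachdisk 42 1
```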

OpenNebula 3.6 also features improvements in other subsystems, especially in Sunstone's interface, with the redesign of several tabs, and in OpenNebula Zones, where we got rid of the DataMapper dependency to ease the packaging of OpenNebula.

Last but not least, OpenNebula 3.6 is fully integrated with the new OpenNebula Marketplace. Any user of an OpenNebula cloud can very easily find and deploy virtual appliances through familiar tools like the Sunstone GUI or the OpenNebula CLI. The OpenNebula Marketplace is also of interest to software developers looking to quickly distribute a new appliance, making it available to all OpenNebula deployments worldwide.

With this beta release, OpenNebula Lagoon enters feature freeze and we’ll concentrate on fixing bugs and smoothing some rough edges. This release is aimed at testers and developers to try the new features.

As usual OpenNebula releases are named after a Nebula. The Lagoon Nebula (also known as M8, or NGC 6523) is a giant interstellar cloud in the constellation Sagittarius.

Thanks to the community members and users who have contributed to this software release by being active in the discussions, answering user questions, or providing patches for bug fixes, features and documentation.


FutureGrid (FG) is a testbed providing users with grid, cloud, and high performance computing infrastructures. FG employs both virtualized and non-virtualized infrastructures. Within the FG project, we offer different IaaS frameworks as well as high performance computing infrastructures by allowing users to explore them as part of the FG testbed.

To ease the use of these infrastructures, as part of performance experiments, we have designed an image management framework, which allows us to create user-defined software stacks based on abstract image management and uniform image registration. Consequently, users can create their own customized environments very easily. The complex processes of the underlying infrastructures are managed by our software tools and services. These software tools are not only able to manage images for IaaS frameworks, but they also allow the registration and deployment of images onto bare metal by the user. This level of functionality is typically not offered in an HPC (high performance computing) infrastructure. Therefore, our approach changes the paradigm from administrator-controlled dynamic provisioning to user-controlled dynamic provisioning, which we also call raining. Thus, users obtain access to a testbed with the ability to manage state-of-the-art software stacks that would otherwise not be supported in typical compute centers. Security is also considered by vetting images before they are registered in an infrastructure. Figure 1 shows the architecture of the image management framework.

Figure 1. Image Management architecture

This framework defines the full life cycle of the images in FutureGrid. It involves the process of creating, customizing, storing, sharing, and registering images for different FG environments. To this end, we have several components to support the different tasks involved. First, we have an Image Generation tool that creates and customizes images according to user requirements (see Figure 2-a). The second component is the Image Repository, which is in charge of storing, cataloging and sharing images. The last component is an Image Registration tool, which prepares, uploads and registers images for specific environments, like HPC or different cloud frameworks (see Figure 2-b). It also decides if an image is secure enough to be registered or if it needs additional security tests.

Figure 2. Image Generation and Image Registration flow charts.

Within this framework, OpenNebula plays an essential role in supporting the image creation process. As we can see in Figure 2-a, the image generation component is able to create images from scratch or by cloning images from our image repository. When we generate an image from scratch, it is created using the bootstrap tools provided by the different OSes, such as yum for CentOS and debootstrap for Ubuntu. To deal with different OSes and architectures, we use cloud technologies: an image is created, with all of the user's specified packages inside, in a VM instantiated on demand by OpenNebula. Therefore, multiple users can create multiple images for different operating systems concurrently; this approach provides us with great flexibility, architecture independence, and high scalability.
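The bootstrap step described above can be sketched as follows. The package list, target paths and distro suite names are hypothetical, and debootstrap/yum need root and network access inside the generation VM, so those commands are shown as comments:

```shell
# Hypothetical list of user-requested packages, as collected by the
# image generation tool:
cat > /tmp/packages.txt <<'EOF'
openssh-server
build-essential
EOF

# Ubuntu base image via debootstrap (inside the on-demand VM):
#   debootstrap --arch amd64 precise /mnt/image http://archive.ubuntu.com/ubuntu/
# CentOS base image via yum into an alternate root:
#   yum --installroot=/mnt/image groupinstall core

# The requested packages are then installed into the new root, e.g.:
#   chroot /mnt/image apt-get install -y $(cat /tmp/packages.txt)
```

Because each generation run happens in its own VM, concurrent users and mixed architectures do not interfere with one another.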

More information in:

  • J. Diaz, G.v. Laszewski, F. Wang, and G. Fox. “Abstract Image Management and Universal Image Registration for Cloud and HPC Infrastructures”, IEEE Cloud 2012, Honolulu, Hawaii, June 2012.
  • FutureGrid Rain Software Documentation. http://futuregrid.github.com/rain
  • FutureGrid Portal. https://portal.futuregrid.org/



There are already several well-established grid and cloud infrastructures in Europe. The next issue is how to exploit these infrastructures, how to port and develop applications for them, and how to extend their user communities. The main goal of this summer school is to answer these questions and to promote best-practice examples for potential application developers and users of e-science infrastructures. The summer school is co-organised by three current European Union projects, SCI-BUS, SHIWA and EDGI, to show various aspects of cloud and grid computing.

A whole day is devoted to cloud computing, where students are trained on how to build and use cloud systems. This day is based on OpenNebula technology because of its widespread European use. In the morning, cloud systems in general, and OpenNebula in particular, will be introduced to the attendees. In the afternoon, during a hands-on session, they will have a chance to use a cloud system based on the latest release of OpenNebula.

We recommend the summer school to PhD students and technical staff members who are interested in learning about grid middleware, desktop grid and cloud technologies, as well as to staff members of companies who would like to establish a company-level DCI (based on clouds, grids or desktop grids) and utilize this DCI or other commercial clouds via a high-level gateway service.

For detailed information, please visit this webpage: http://www.lpds.sztaki.hu/summerschool2012/

Registration deadline is 28th June, 2012.

One of the main design principles in OpenNebula focuses on enabling large-scale deployments. In this type of deployment we usually have to deal with a large number of physical hosts, with the intention of running a large number of virtual machines. This matters because many OpenNebula users run large-scale deployments with tens of thousands of virtual machines.

The scalability of the virtual infrastructure manager is, without a doubt, a keystone when a large-scale cloud deployment is at stake. The ability to handle a large number of resources, keeping track of them while staying responsive, is essential. For that very reason, the OpenNebula project has put a lot of effort into making the central component of OpenNebula, the core daemon, as stable and robust as possible. This was largely possible thanks to the invaluable feedback provided by the community, so kudos to you! But aside from scalability, there are many other aspects in which OpenNebula can contribute, with features specifically designed to handle a large number of resources:

  • Clusters. These logical entities can be defined as pools of physical hosts that share datastores and virtual networks. Clusters are used for load balancing, high availability, and high performance computing. The idea is to group a set of physical hosts that are homogeneous enough to pull images from the same server (i.e., they share a datastore) and to use the same virtual networks, that is, they have the same physical network configuration, whether they share the same bridging configuration or have access to the same Open vSwitch, for instance. Their benefits with respect to large-scale deployments include the ability to deliver a particular virtual machine to the right hardware, which can get tricky as the number of physical resources increases (e.g., “I want this VM to run on a host with the best network connection available”), and the possibility to load-balance I/O operations across several datastores.
  • Virtual Data Centers. Fully isolated virtual infrastructure environments where a group of users, under the control of the Virtual Data Center (VDC) administrator, can create and manage compute, storage and networking capacity. The VDC administrator can create new users inside the VDC. Both admins and users access the VDC through a reverse proxy, so they don’t need to know the endpoint of the OpenNebula cloud, but rather the address of the oZones server and the VDC they belong to. This feature can be used in large-scale deployments to achieve multi-tenancy, effectively partitioning a large cloud into smaller parts ready to be delivered to different groups or organizations.
  • Hybrid Cloud. This extension of a private cloud allows the combination of local resources with resources from remote Cloud providers, done transparently through OpenNebula. The remote provider could be a commercial Cloud service, such as Amazon EC2, or a partner infrastructure running a different OpenNebula instance. Such support for cloudbursting enables highly scalable hosting environments, since the peak demands that cannot be satisfied by local resources are outsourced to external providers.
  • OpenNebula Zones. A Zone can be seen essentially as an OpenNebula instance, that is, a group of interconnected physical hosts with hypervisors controlled by OpenNebula. A Zone can be added to the oZones server, which provides a centralized way to manage multiple OpenNebula deployments. In this way, the oZones server presents a list of aggregated resources, allowing for a loose federation of several Clouds, adding an order of magnitude in the scale of Cloud infrastructures that can be managed with OpenNebula technology.
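The cluster mechanism from the first bullet can be sketched as follows. The cluster, host, datastore and network names are illustrative, and the `onecluster` commands require an OpenNebula 3.6 front-end, so they are shown as comments:

```shell
# Group homogeneous hosts, a shared datastore and a virtual network
# into one cluster:
#   onecluster create production
#   onecluster addhost      production host01
#   onecluster adddatastore production shared_images
#   onecluster addvnet      production public_net

# A scheduling requirement in a VM template can then steer the VM onto
# that cluster's hardware (expression is illustrative):
cat > /tmp/sched.one <<'EOF'
SCHED_REQUIREMENTS = "CLUSTER = \"production\""
EOF
```

This is how “deliver a particular virtual machine to the right hardware” is expressed in practice: the scheduler only considers hosts matching the requirement expression.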

We’ve recently released a Sandbox Appliance for VMware, and now it is KVM’s turn. The appliance image is already in the OpenNebula Marketplace, but hold your horses and don’t download it just yet. Head to the documentation and download the script that will be used to configure your machines, then download the image and start it up.

You will need at least a couple of machines running Ubuntu Server 12.04 64-bit, and their CPUs must support hardware virtualization, since we are using KVM here. The appliance is based on the VMware one, so all the libraries needed for compilation are already there and you’ll be able to develop inside it if you need to.

Use the OpenNebula users’ mailing list if you have any problems or questions.

Happy play time in the sandbox!