OpenNebula Conf 2017 US: Agenda Available



The OpenNebula Project is proud to announce the first agenda and line-up of speakers for the fifth OpenNebula Conference, to be held in Cambridge, MA from June 19 to 20, 2017. Guided by your feedback from previous editions, we have included more educational and community sessions for learning and networking.


The agenda includes four keynote speakers:

Educational Sessions 

This year we will have two pre-conference tutorials:

We have also increased the educational content, with presentations from the OpenNebula team showing and demoing some of the most requested features and latest integrations.

Community Sessions

We had a big response to the call for presentations. Thanks for submitting a talk proposal! Although all submissions were of very high quality and merit, the expanded educational content leaves space for only a few community presentations this year. Jordi Guijarro from CSUC, Roy Keene from Knight Point and Hayley Swimelar from LINBIT will discuss their experiences and integrations with OpenNebula.

We will also have two Meet the Experts sessions, providing an informal atmosphere where delegates can interact with experts who will give their undivided attention for knowledge, insight and networking; and a session of 5-minute lightning talks. If you would like to speak in these sessions, please contact us!

Besides the amazing talks, the OpenNebulaConf registration comes packed with multiple goodies. You are still in time to get a good deal on tickets: there is a 20% discount until May 12th.


We are looking forward to welcoming you personally in Boston!


Hybrid Clouds: Dancing with Virtual Machines

Today, the hybrid cloud model is drawing the attention of many organizations. Combining public cloud resources with private ones depending on execution requirements, the need for extra capacity, additional data protection, or stricter security for services handling sensitive information: these are some of the capabilities this model has to deliver. The case of CSUC, acting both as an OpenNebula-powered cloud provider and as a consumer of external IaaS and PaaS services, is analyzed here, sharing its first experiences on the way to achieving a real multi-cloud architecture.


Figure 1. The hybrid cloud acts as a slicing bridge.

The majority of organizations were not born in the “cloud”, a situation that produces many cases where cloud resources have to interact with, or be connected to, traditional systems and applications of some criticality, usually hosted on the organization's own premises. This kind of architecture is the most common, and the keys to its success are awareness of aspects such as integration capabilities and the impact on organizational roles, where different interpretations of the same concept may coexist.

For a solutions designer, the model has to offer flexibility, speed and capacity. For the infrastructure team, it is easy to think that more “different things” will have to be managed. The business development staff will ask whether it will be more expensive and what value the hybrid model will contribute.

From the IT infrastructure perspective, once a public cloud provider is selected or procured, a good approach to adopting the hybrid cloud model is to focus on these first three challenges:

  • Networking: extend your network layers by provisioning capable “dedicated” circuits, with bandwidth control both outside and inside the public cloud provider(s).
  • Management: consolidate a truly global infrastructure management platform through a cloud orchestrator and its cloud bursting capabilities.
  • Security: extend the organization's current security layers (platforms, tools, policies, …) to the public cloud provider, under the same organizational umbrella.

Ultimately, the model or architecture to implement will differ in each case, and the applications and services have a lot to say and must be listened to. The reality is that nowadays almost all organizations work at two different speeds, and this is something technology departments have to deal with: fast-changing environments such as web and mobile applications on one side, and more robust systems tied to critical processes with high levels of stability and security on the other. A real hybrid cloud model should support the combination of these two-speed ecosystems while maintaining their specific dependencies and needs.

The CSUC use case: driving to a hybrid cloud architecture

Over the last two years, CSUC has worked on these challenges, with actions that are transforming roles, the management models at the ICT infrastructure level, the way services are provisioned, and more.

The first action was to start a procurement process to select the first IaaS provider, with the goal of optimizing infrastructure and services under a new pay-per-use model. In parallel, as an RREN, CSUC adopted an orchestrator role with its own cloud management platform (OpenNebula based) to manage distributed resources in a real multi-cloud environment. Network design, unified management and security conditioned the strategy.

Regarding the new external cloud provider, after an intense procurement process the company Nexica was awarded a four-year contract for the provision of IaaS services for the Catalan universities and CSUC.

Nowadays, the Catalan institutions using this shared service are:

What work has already been done:

  • First IaaS provider procurement process.
  • Service catalog, service-level agreements and governance.
  • Architecture redefinition.
  • RREN extended to the provider: layers 2 & 3.
  • A new OpenNebula cloud bursting driver was developed (vCloud compliant).
  • Integration with the CSUC OpenCloud orchestrator.
  • Deployment of first production services.
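To illustrate how such a bursting driver plugs into the orchestrator, a remote provider is typically registered as an OpenNebula host whose information and virtualization drivers point to the add-on. A minimal, hypothetical host template (the names and values below are illustrative, not CSUC's actual configuration):

```
# Hypothetical host template: a vCloud VDC registered as an OpenNebula host
NAME   = "vcloud-vdc-csuc"   # illustrative name
IM_MAD = "vcloud"            # information/monitoring driver from the add-on
VM_MAD = "vcloud"            # virtualization driver from the add-on
```

Once registered, the scheduler can place VMs on this host like on any local one, which is what makes the bursting transparent.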

If you are interested in more details, please contact us. This initiative will be presented at the next TNC17, the annual GÉANT conference, in Linz, Austria [31/5/2017] (see the full paper here).

TechDay Prague 2017 Wrap Up

This little post is here only to thank the people from CESNET for organizing the event. I also want to thank the attendees for sharing the day with me.

We’ve changed the format of the TechDay a bit and it turned out to be great:

  • The tutorial is now done in prepared laboratories in the cloud. This helped a lot for people with Windows or 32-bit laptops.
  • Instead of talks after lunch, there was a demonstration of OpenNebula and vCenter, and a Q&A session where attendees could ask about specific features or how to accomplish certain workflows.


LXDoNe – Lightweight Virtualization for OpenNebula

Operating system (OS) level virtualization is a technology that has recently emerged into the cloud services paradigm. It has the advantage of providing better performance, elasticity and scalability than para-virtualization or full virtualization. This is because HVM hypervisors need to emulate hardware and boot a separate kernel for each virtual machine deployed. OS-level virtualization follows a completely different approach: the kernel is shared between the host and the “virtual machines” running on top of it. A container is a virtual environment with its own process and network space that uses Linux kernel control groups (cgroups) and namespaces to provide isolation. Containers have their own view of the OS, process ID space, file system structure and network interfaces. Since they use kernel features, and there is no hardware emulation at all, the impact on performance is minimal.


LXD is not just an operating-system-level virtualization technology; it is a hypervisor for containers. This means LXD containers look and act like virtual machines, but have the lightweight performance and scalability of process containers [1]. LXD has shown dramatic improvements over HVM hypervisors such as KVM [2] and ESX [3] in aspects such as density, speed and latency.


LXDoNe is an add-on that allows OpenNebula to manage LXD containers. It contains virtualization and monitoring drivers. Right now it is deployed in the data center of Universidad Tecnológica de La Habana José Antonio Echeverría, alongside KVM for special cases that require a different kernel. We are actively working on adding more features, and any reported bug will be prioritized by our team.

Features and Limitations

Right now, the driver has the following features:

  • Life cycle control: deploy, shutdown, restart, reset, suspend and resume containers.
  • Support for Direct Attached Storage (DAS) filesystems such as ext4 and btrfs.
  • Support for Storage Area Networks (SAN) implemented with Ceph.
  • Monitor hosts and containers.
  • Limit containers’ resource usage: RAM and CPU.
  • Support for VNC sessions.
  • Deploy containers with several disks and Network Interface Cards (NICs).
  • Support for dummy and VLAN network drivers.
  • Full support for OpenNebula’s contextualization in LXD containers (using special LXD images that will be uploaded to the market).
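A monitoring driver like LXDoNe's ultimately reports host and container metrics to OpenNebula as plain KEY=VALUE lines on standard output. The sketch below is illustrative only (it is not LXDoNe's actual code); the attribute names follow OpenNebula's information-manager conventions:

```python
# Illustrative sketch of an information/monitoring driver's output format.
# OpenNebula's core parses simple KEY=VALUE lines emitted by the driver.

def format_host_metrics(total_mem_kb, used_mem_kb, total_cpu, used_cpu):
    """Render host metrics in the KEY=VALUE format IM drivers use."""
    metrics = {
        "TOTALMEMORY": total_mem_kb,  # total host memory, in KB
        "USEDMEMORY": used_mem_kb,    # memory in use, in KB
        "TOTALCPU": total_cpu,        # 100 per core, e.g. 400 = 4 cores
        "USEDCPU": used_cpu,          # consumed CPU, same scale
    }
    return "\n".join(f"{key}={value}" for key, value in metrics.items())

print(format_host_metrics(16384256, 4096000, 400, 125))
```

The real driver gathers these numbers from the host and from each container before printing them in this format.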

Features we are currently working on, most of them should be ready in the next couple of weeks:

  • Migration.
  • Snapshots.
  • Hot attach and detach NICs and disks.
  • LVM support.
  • Bandwidth limitation.

Known bugs:

  • VNC sessions only work with the first machine on each node (see the linked issue for the reason behind this behavior).

The driver has been released here. There you can find the installation guide to start using LXD with OpenNebula.

Contributions, feedback and issues are very much welcome by interacting with us in the GitHub repository or writing a mail:

  • Daniel Clavijo Coca:
  • José Manuel de la Fé Herrero:
  • Sergio Vega Gutiérrez:

Announcing the OpenNebula Champion Program

We would like to officially announce the OpenNebula Champion program. If you attended our last Conference, you’re probably aware that for the past couple of months, we’ve been designing a program for our community members who are interested in becoming Champions for OpenNebula around the world.

Champions are passionate volunteers who work to connect, teach and spread OpenNebula throughout the world. On the project web site you can find complete information on what the program entails, how the project plans to support its Champions, and how to become an OpenNebula Champion.

This is the list of the first ever Champions of the OpenNebula project:


Over the last few years we have traveled all over the world delivering talks and hands-on workshops and organizing TechDays and tutorials, where we have met many amazing, enthusiastic, resourceful and engaged people. To continue our journey, in a few days we will announce the schedule of OpenNebula Conferences, TechDays and Training sessions for 2017. These events provide a great opportunity to raise awareness of the project and get more of you involved as contributors and users. As we scale the project to the next level, we need your help in spreading the message.

We look forward to your participation, and we would like to heartily thank our Champions and TechDay hosts!

New Maintenance Release 5.2.1

On behalf of the team, we are happy to announce maintenance release 5.2.1. This version fixes some problems found since 5.2.0 and adds some minor functionality.

The following new functionality has been introduced in 5.2.1:

The following issues have been solved in 5.2.1:

Relevant Links

OpenNebula 5.4 – Enhancements in vCenter Integration

The development of OpenNebula 5.4 has already started, and the biggest focus for this release is improving the vCenter integration. In particular, two areas will be addressed, networking and storage, in order to bring the vCenter integration up to speed with the open source hypervisor, KVM. After this release there will be virtually no difference between managing VMs on VMware and on KVM.


The goal of the vCenter network integration is to be able to create networks (i.e., Port Groups and Distributed Port Groups) from within OpenNebula, not just consume them. We will be using the same mechanism as in KVM, i.e., configuring the underlying network dynamically as required by the Virtual Machines, including the management of VLAN IDs.
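Under this model, a tenant network would be described once in OpenNebula and the matching Port Group created on demand. A hypothetical virtual network template along these lines (attribute values are illustrative; the exact driver attributes are defined by the 5.4 release):

```
# Hypothetical virtual network template for the planned vCenter driver.
# OpenNebula would create the corresponding (Distributed) Port Group,
# VLAN tag included, instead of requiring it to pre-exist in vCenter.
NAME    = "tenant-net"
VN_MAD  = "vcenter"          # vCenter network driver (illustrative)
VLAN_ID = 42                 # VLAN ID managed by OpenNebula
BRIDGE  = "tenant-net-pg"    # Port Group name to be created
AR = [ TYPE = "IP4", IP = "10.0.42.2", SIZE = "250" ]
```

This mirrors how KVM networks are already defined, which is the point of the integration.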



There is a clear benefit from this new feature, as it implies a great improvement to the provisioning model. Typical use cases involve administrators creating groups of users, or even VDCs, in a multi-tenant environment where network isolation is a must. Currently, administrators are required to create Port Groups or Distributed vSwitches in vCenter beforehand in order to provide tenants with isolation.

The second step is a bigger undertaking, which will take more than one release to accomplish: offering this functionality directly in OneFlow, with the ability to create new networks automatically whenever a service is launched. This will allow tenants to spin up network-isolated multi-VM services without further management steps in vCenter or OpenNebula.



The aim here is to improve the vCenter storage drivers to the same level as their KVM counterpart, enabling:

  • System Datastores
  • Non-persistent images

This again is a big undertaking that will take more than one release to accomplish.

This functionality will enable important features for the vCenter integration, such as allowing the OpenNebula scheduler to intelligently choose the target system datastore (replacing Storage DRS!), support for storage quotas, disk resizing both at boot and at runtime, and much more!
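The datastore choice mentioned above is, at its core, a rank-based selection, in the spirit of OpenNebula's `SCHED_DS_RANK` expressions (e.g. `RANK = "FREE_MB"`). A minimal Python sketch of the idea, with made-up datastore data:

```python
# Illustrative sketch: rank-based selection of a system datastore,
# similar in spirit to a SCHED_DS_RANK = "FREE_MB" policy.

def pick_datastore(datastores, rank_key="free_mb"):
    """Return the name of the datastore with the highest rank value."""
    eligible = [ds for ds in datastores if ds[rank_key] > 0]
    if not eligible:
        raise ValueError("no datastore with free capacity")
    return max(eligible, key=lambda ds: ds[rank_key])["name"]

datastores = [
    {"name": "ds-ssd", "free_mb": 120000},
    {"name": "ds-sas", "free_mb": 430000},
    {"name": "ds-nfs", "free_mb": 80000},
]
print(pick_datastore(datastores))  # → ds-sas
```

The real scheduler also filters by requirements (clusters, capacity, quotas) before ranking, but the placement decision follows this shape.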


The network and storage import process will be greatly improved. Whenever a VM Template is imported, all of its NICs and DISKs will be imported, and the associated networks, datastores and images will be created.

Additionally, improvements in driver efficiency will ensure OpenNebula's scalability on very large-scale infrastructures. The main focus will be time and memory efficiency, mainly in monitoring operations.

The OpenNebula roadmap is strictly based on your input. Thanks for your amazing feedback!

9 Years of OpenNebula!


Today is the 9th anniversary of the founding of the OpenNebula open source project. We still remember the early days of OpenNebula, when we decided to start an open-source project to grow, distribute and support the software prototypes that we developed to support our research in virtualization management on distributed infrastructures. Yet here we are, nine years later, with one of the most successful projects in the open cloud ecosystem.

OpenNebula has grown into a widely-used user-driven project, with lots of examples of production use in academia, research and industry, over 100K downloads from the project repositories in the last year, more than 1,500 clouds connected to our marketplace, and large-scale infrastructures with tens of data centers and hundreds of thousands of cores. We are very proud of this growth as the only fully-open enterprise-ready management platform to build clouds.

None of it would be possible without your help, the hard work of thousands of users and contributors!

On behalf of the OpenNebula Team

How We Use OpenNebula at Fuze

Fuze, headquartered in Cambridge, MA with additional locations in North America, Europe, Asia and Australia, is a global, cloud-based unified communications platform that empowers productivity and delivers insights across the enterprise through a single unified voice, video, and messaging application. Fuze allows the modern, mobile workforce to seamlessly communicate anytime, anywhere, across any device. With Fuze, customers have a single global carrier network leveraging a resilient private cloud infrastructure and QoS-engineered network to deliver the best enterprise-class IP PBX voice service available.

Given the company’s record momentum and growth, scaling the private cloud infrastructure to keep pace with global expansion is key to customer satisfaction. In mid-2015, we deployed OpenNebula as part of the Fuze private cloud management stack to achieve a continuously growing, reliable private cloud spanning Fuze’s global infrastructure.

Here’s a quick look at the key features of OpenNebula that powered Fuze Private Cloud:

  • Simple and lightweight deployment, easily extendable since it is open source.
  • Self-service capability through the GUI (Sunstone), CLI and API.
  • Built-in multi-tenancy, providing segregation among internal workloads.
  • Support for agile DevOps by abstracting the underlying infrastructure and supporting mainstream hypervisors as well as public cloud bursting.
  • Automated VM orchestration via contextualization, which personalizes instances at instantiation time.
  • Service orchestration provided by OneFlow and OneGate, supporting multi-tier applications using role-based instances. In addition, OneFlow can apply auto-scaling and recovery rules based on either generic or custom application attributes pushed via OneGate.
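The contextualization mentioned above works by delivering a `context.sh` file of shell variable assignments to the guest (typically on a CD-ROM image), which scripts inside the VM then read. A simplified sketch of how a guest-side script could parse such a file (the variable names in the sample are illustrative):

```python
# Illustrative sketch: parsing OpenNebula-style context.sh variables.
# context.sh contains simple VAR='value' shell assignments.

def parse_context(text):
    """Parse VAR='value' lines from a context.sh-style string into a dict."""
    context = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        context[key.strip()] = value.strip().strip("'\"")
    return context

sample = """
# Context variables generated by OpenNebula
HOSTNAME='web-01'
SSH_PUBLIC_KEY='ssh-rsa AAAA... user@host'
START_SCRIPT='systemctl start nginx'
"""
print(parse_context(sample)["HOSTNAME"])  # → web-01
```

In practice the official context packages do this work, but the mechanism is exactly this: key/value pairs injected at instantiation time.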

Currently, Fuze Private Cloud has a central OpenNebula deployment in a data center in the eastern US. This fault-tolerant OpenNebula deployment is configured to connect to and manage Fuze data centers across US East/West, Australia, Asia, and Europe.


VMware ESXi/vCenter 6.0 is the dominant virtualization technology used across our data centers, backed by SAN-based storage, and we also use Amazon Web Services. OpenNebula leverages the SOAP API offered by each data center's vCenter to manage the entire global cloud infrastructure and present it to the different engineering teams as a single orchestration pane of glass. In addition, we connected OpenNebula to a few AWS Virtual Private Clouds (VPCs) with dedicated tenancy, into which it bursts where applicable.

We are currently running OpenNebula 5.0, which brought an enhanced user experience through Sunstone, renamed lifecycle states, and a redesigned Marketplace. We will upgrade to OpenNebula 5.2 soon to make use of the upgraded hybrid cloud drivers.

Overall, the OpenNebula project has taken Fuze Global Private Cloud to the next level and continues to be a fundamental factor for continuous innovation and customer satisfaction.

New OpenNebula VCLOUD driver: Building Hybrid Clouds with VMware cloud providers

By definition, “hybrid cloud computing” is a model that combines the use of multiple cloud services across different deployment models, for example combining public cloud services with a private cloud outside or inside the organization or institution.

Most companies and organizations were not born in the “cloud”, a situation that often means cloud resources have to be connected to traditional systems or applications of some criticality, usually located on their own premises. This type of architecture is the most common, and the keys to its success include taking into account aspects such as integration capabilities, hyper-converged management, etc.

Cloud bursting is always welcome!

Today we are sharing exciting news about the expansion of the number of public clouds supported by OpenNebula for building hybrid cloud deployments. As a result of the collaboration between OpenNebula and CSUC, a new add-on supporting vCloud providers has been added to the OpenNebula catalogue.

“With this addon, real hybrid architectures can use OpenNebula’s rich set of infrastructure management tools to manage cloud deployments across VCLOUD private, public and hosted cloud platforms.”


The driver is developed for OpenNebula 5.x and vCloud 5.5, and is released today for testing. The integration has been carried out using ruby_vcloud_sdk, which interacts with the vCloud Director API, enabling complete control of the lifecycle of virtual machines in a transparent way within an OpenNebula cloud. Thanks to this new add-on, private resources can easily be supplemented with resources from external providers to meet fluctuating demands.


This add-on gives OpenNebula the ability to manage resources in VMware vCloud infrastructures. It includes virtualization and monitoring drivers.

This driver is based on the vCenter driver and uses a modified version of ruby_vcloud_sdk.



This addon has the following capabilities:

  • Deploy, stop, shutdown, reboot, save, suspend, resume and delete VMs in the Virtual Data Centers hosted in vCloud.
  • Create, delete and revert snapshots of VMs.
  • Change RAM and CPU values of a VM.
  • Hot-attach and detach NICs to and from VMs.
  • Automated customization of instantiated VMs.
  • Obtain monitoring information from the VDC, datastores and VMs.
  • In this development version, vApps with one VM inside are managed (a VM in OpenNebula equals a vApp with one VM in vCloud).
  • Each Virtual Data Center (VDC) in vCloud is managed as a host in OpenNebula.
  • Import networks, hosts, templates and datastores hosted in vCloud using the onevcloud script.

Need more information? You are welcome to use the OpenNebula community channels to ask around (for instance, the forum is a good place to pose your questions), or reserve a seat to see the details at the next Open Cloud Free Session in Barcelona (24/10, 14:00h).

As always, we value your feedback and contributions to this new feature!

Barcelona UserGroup Team –