At TeleData we operate several OpenNebula instances in our datacenters and offer public cloud services such as IaaS, PaaS, and scalable hosting and ISP services for our business customers in Germany.

Professional 24/7 monitoring of our whole infrastructure is an essential element of the service we provide to our customers. A technical solution therefore has to offer the ability to drill down through the hierarchy to identify bottlenecks and to support proper capacity management. Every OpenNebula user has to know their workload and plan their resources. At TeleData we use N+1 redundancy to cover host failures and similar events.

With this strategy you cannot allocate 100% of your available resources, so you have to monitor your current allocations. Being able to identify your most loaded VMM hosts or your most utilized vDCs is also valuable information for doing system and resource management the right way.

In this little guide we will extract monitoring data in the form of Graphite metrics for further, deeper and, last but not least, long-term monitoring.

It turned out to be quite easy to pull this information out of OpenNebula: we developed a small script that collects the desired metrics and sends them to Graphite.

First a small diagram for a quick overview:

Diagram OpenNebula - Graphite - Grafana


Let's start.
(This guide assumes that you are already familiar with Graphite and Grafana)


  1. Gathering the wanted Information
    As advised in the OpenNebula forum [1], we use the XML output of the standard tools "onehost" and "onegroup" on the OpenNebula controller/daemon, executed by a cron job every 5 minutes. The script is written in Ruby and uses common gems like "nokogiri" for the XML processing.
  2.  Send out metrics to Graphite
    The script generates Graphite metrics from the desired information and pushes them to your Graphite server. Graphite stores the metrics according to your storage schema (the syntax is frequency:retention).
  3.  Display / Graph your metrics
    With your time-series data in Graphite you can use shiny tools like Grafana [2] or Dashing [3] to create informative and quite impressive dashboards for your OpenNebula ops team. With some templating (included in our JSON exports) you can unleash the power of Grafana.
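To make the pipeline concrete, here is a minimal shell sketch of step 2 (this is not our actual Ruby script; the metric path, the host name and the GRAPHITE_HOST/GRAPHITE_PORT defaults are illustrative assumptions):

```shell
#!/bin/sh
# Sketch: emit OpenNebula metrics in Graphite's plaintext protocol,
# which is simply "path value timestamp", one metric per line.
GRAPHITE_HOST="${GRAPHITE_HOST:-graphite.example.com}"
GRAPHITE_PORT="${GRAPHITE_PORT:-2003}"

# Build one plaintext-protocol metric line.
graphite_line() {
  printf '%s %s %s\n' "$1" "$2" "$3"
}

# A cron job on the OpenNebula front-end could extract values from
# `onehost list -x` / `onegroup list -x` and pipe them out, e.g.:
#   graphite_line "one.hosts.$name.cpu_usage" "$cpu" "$(date +%s)" \
#     | nc "$GRAPHITE_HOST" "$GRAPHITE_PORT"

graphite_line "one.hosts.node01.cpu_usage" 180 1462000000
```

The plaintext protocol on port 2003 is the simplest way to feed Carbon; a real collector would loop over all hosts and groups from the XML output.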

See some examples:


All the work is published on GitHub: one-graphite.
If there are any questions or issues, feel free to add your comment in the OpenNebula forum [1].

In our opinion, OpenNebula works like a charm. It is open and flexible, and one of the most mature, comprehensive and valuable cloud stacks available on the market. We have mentioned that before: http://opennebula.org/opennebula-at-teledata/

Because of its low complexity, it's simple to enhance and to add features.
Of course, our Graphite cluster is also powered by OpenNebula, like many of our other services.

Have fun!



[1] https://forum.opennebula.org/t/long-term-statistics-and-capacity-management-for-opennebula-clouds/1886/3
[2] http://grafana.org/
[3] http://shopify.github.io/dashing/

Operating-system-level virtualization, a technology that has recently emerged and is being adopted in cloud infrastructures, has the advantage of providing better performance and scalability than other virtualization technologies such as hardware-assisted virtual machines (HVM).

Linux Containers (LXC) make use of this technology by creating containers that resemble complete, isolated Linux virtual machines on the physical Linux machine, all while sharing the kernel with the virtual portion of the system. A container is a virtual environment with its own process and network space. LXC uses Linux kernel control groups and namespaces to provide the isolation. Containers have their own view of the OS, the process ID space, the file system structure, and the network interfaces. Since they use kernel features and there is no hardware emulation at all, the impact on performance is minimal. Starting up and shutting down containers, as well as creating or destroying them, are fairly quick operations. There have been comparative studies, such as LXD vs KVM by Canonical, which show advantages of LXD systems over KVM. LXD is built on top of LXC and uses the same kernel features, so performance should be the same.
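As a quick, generic illustration of the kernel namespaces mentioned above (not specific to LXC): every Linux process carries references to its namespaces under /proc/<pid>/ns, and a container's processes point at different namespace objects than the host's.

```shell
#!/bin/sh
# Print the namespace objects of the current process. Inside a container,
# these IDs differ from the host's for every namespace the container unshares.
for ns in pid net uts ipc mnt; do
  printf '%s -> %s\n' "$ns" "$(readlink "/proc/self/ns/$ns")"
done
```

Comparing this output between the host and a container process makes the isolation boundary visible without any hardware emulation being involved.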

Nowadays, public cloud Infrastructure as a Service (IaaS) providers, like Amazon, only offer applications based on Docker containers deployed on virtual machines. Docker is designed to package a single application with all of its dependencies into a standardized unit for software development, not to create a virtual machine. Only a few providers offer IaaS on bare-metal container infrastructures. Joyent, for instance, is able to put to use all of the advantages that OS virtualization provides.

However, in private cloud scenarios OS virtualization technology hasn't had the acceptance it deserves. Private cloud managers, like OpenStack and Eucalyptus, don't offer the support needed for this type of technology. OpenNebula is a flexible cloud manager which has gained a very good reputation over the last few years. Strengthening the OS virtualization support in this cloud manager could therefore be a key strategic decision.

This is why LXCoNe was created. It is a virtualization and monitoring driver for OpenNebula that comes as an add-on to provide OpenNebula with the ability to deploy LXC containers. It contributes to better interoperability, performance and scalability in OpenNebula clouds. Right now, the driver is stable and ready for release. It is currently being used in the data center of the Instituto Superior Politécnico José Antonio Echeverría in Cuba, with great results. The team is still working on adding some more features, shown next, to improve the driver.

Features and Limitations

The driver has several features, such as:

  • Deployment of containers on file systems, Logical Volume Manager (LVM) and Ceph.
  • Attachment and detachment of network interface cards and disks, both before the container is created and while it is running.
  • Monitoring of container and node resource usage.
  • Powering off, suspending, stopping, undeploying and rebooting running containers.
  • VNC support.
  • Snapshot support when using file systems.
  • Limiting a container's RAM usage.

It lacks the following features, on which we are currently working:

  • Limiting a container's CPU usage.
  • Live migration of containers.

Virtualization Solutions

OpenNebula was designed to be completely independent of the underlying technologies. When the project started, the only supported hypervisors were Xen and KVM; OS-level virtualization was not considered. This probably influenced the way OpenNebula manages physical and virtual resources. Because of this, and due to the differences between the two types of virtualization technology, there are a few things to keep in mind when using the driver. These are:


Hard drives

When you successfully attach a hard drive, it will appear inside the container under /media/<Disk_ID>. To detach the hard drive, it must still be mounted inside the container at that same path, and it must not be in use; otherwise the operation will deliberately fail.
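A small, hypothetical pre-detach check along these lines can save a failed operation (the /media/<Disk_ID> layout comes from the driver; the mount-inspection approach, the disk ID and the <VM_ID> placeholder are assumptions for the example):

```shell
#!/bin/sh
# disk_busy returns success (0) if the disk's mount point is currently
# mounted, and therefore potentially still in use inside the container.
disk_busy() {
  mount | grep -q " /media/$1 "
}

if disk_busy 1; then
  echo "disk 1 is mounted and may be in use; detach could fail"
else
  echo "disk 1 looks idle; safe to try: onevm disk-detach <VM_ID> 1"
fi
```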

Network interfaces (NIC)

If you hot-attach a NIC, it will appear inside the container, but without any configuration; it will be up and ready for you to configure. This is in contrast to what happens when you specify NICs in the template and then create the virtual machine: in that case, the NIC will appear already set up and ready to be used, unless you specifically request otherwise.
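For reference, here is a hedged sketch of the manual configuration a hot-attached NIC then needs inside the container. The interface name and addresses are made up for the example, and the commands are printed rather than executed, since running them requires root and the attached interface:

```shell
#!/bin/sh
# Typical post-hot-attach configuration, printed for inspection only.
NIC_SETUP='ip link set eth1 up
ip addr add 192.168.0.10/24 dev eth1
ip route add default via 192.168.0.1 dev eth1'
printf '%s\n' "$NIC_SETUP"
```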


Want to try it? The drivers are part of the OpenNebula Add-on Catalog, and the installation process is fully and simply explained in this guide.

Contributions, feedback and issues are very welcome; interact with us on the GitHub repository or write us an email:

José Manuel de la Fé Herrero: jmdelafe92@gmail.com
Sergio Vega Gutiérrez: sergiojvg92@gmail.com


OpenNebula Systems has announced an extension of its support services aimed at OpenNebula users with production environments based on KVM who need support for the whole stack. Support can now be extended to include the operating system of the virtualization nodes and the controller (CentOS and/or Ubuntu), the hypervisor (libvirt and KVM), networking technologies such as VXLAN and 802.1Q VLANs, and Ceph as the storage backend.

The supported components are part of the Open Cloud Reference Architecture, created by OpenNebula Systems from the collective information and experience of hundreds of users and cloud client engagements. This reference architecture documents the software products, configurations, and infrastructure platform requirements recommended for a smooth OpenNebula installation.

This new support coverage is offered as an add-on to the traditional OpenNebula support.




As you may already know, this year OpenNebulaConf is taking place in Barcelona again, on October 25-27. Everybody who wants to join as a speaker can now propose a talk via the online form at:

We are looking for content that is appropriate for people who are brand new to OpenNebula, for experts, and for everyone in between. First time submitting? Don't feel intimidated. We know that the cloud and open source community can be very intimidating for anybody who is interested in participating. We strongly encourage first-time speakers to submit talks.

If you are an OpenNebula practitioner, user, architect, DevOps engineer, admin or developer and have something to share, we welcome your submission. Suggested topics include:

– Latest developments in OpenNebula
– Research using OpenNebula
– User experiences and case studies using OpenNebula
– Best practices and tools
– Integration with other cloud, virtualization and data center components
– Any other topics that you feel are relevant to developers, users, researchers and other members of the community

Need inspiration?

Need help with or suggestions for your presentation? We’ve got lots of ideas and are happy to discuss your talk ideas before you submit them. Just reach out to us!


The agenda will include a one-day pre-conference (October 25) with tutorials and hacking session, and a two-day conference (October 26 and 27) with keynotes, regular sessions, lightning talks and open sessions.


Speakers will receive free admission, which includes:

– Attendance at all conference presentations
– Attendance at pre-conference tutorials and hacking sessions
– Coffee during the morning and afternoon breaks
– Lunch on both conference days
– Dinner event on the first conference day
– Tapas dinner on the pre-conference day
– WiFi access

… and the opportunity to address a large audience of talented and influential cloud and open-source experts!


The deadline is April 29, 2016. Speaker selection notifications will go out no later than May 6, 2016, at 11:59 PM CET.

Each presentation slot will be approximately 20 minutes long, followed by a question and answer session with the audience. The agenda will also include some lightning talks and open sessions. You can also submit panels, labs or multi-presenter proposals.

If you want to get an idea of past OpenNebulaConf sessions, including talks from companies such as CentOS, Runtastic, Puppet Labs, Cloudweavers, RedHat, Produban, Unity and Deutsche Post, please check our YouTube channel or download the presentations from our SlideShare account.

We are looking forward to your talks!

Following our series of posts about OpenNebula and Docker, we would like to showcase the use of Docker Swarm with OpenNebula.

Docker Swarm is native clustering for Docker. With Docker Swarm you can aggregate a group of Docker Engines (running as Virtual Machines in OpenNebula in our case) into a single Virtual Docker Host. Docker Swarm delivers many advantages, like scheduling, high availability, etc.

We continue with our approach of using Docker Machine with the OpenNebula driver plugin to deploy Docker Engines easily and seamlessly. Please make sure to follow the previous post so that you have a fully functional Docker Machine setup working with OpenNebula.

As displayed in the following image, Docker Swarm will make a cluster out of a collection of Docker Engine VMs deployed in OpenNebula with Docker Machine:


Docker Swarm makes use of a discovery service to implement cluster communication and discovery. The Docker project provides a hosted discovery service, which is appropriate for testing use cases; in our case, however, we will use Docker Machine to deploy an instance of Consul, in particular this Docker container for Consul.

NOTE: This guide is specific to KVM. If you would like to try this plugin with the vCenter hypervisor, or with vOneCloud, there are a few small differences, so make sure you read this: Docker Machine OpenNebula plugin with vCenter. In particular, you will need to use the --opennebula-template-* options instead of --opennebula-image-*.

The first step is to deploy the Consul machine using Docker Machine:

$ docker-machine create -d opennebula --opennebula-network-name private --opennebula-image-name boot2docker --opennebula-b2d-size 10240 consul
$ docker $(docker-machine config consul) run -d -p "8500:8500" -h "consul" progrium/consul -server -bootstrap
$ CONSUL_IP=$(docker-machine ip consul)

Once it’s deployed, we can deploy the Swarm master:

$ docker-machine create -d opennebula --opennebula-network-name private --opennebula-image-name boot2docker --opennebula-b2d-size 10240 --swarm --swarm-master --swarm-discovery="consul://$CONSUL_IP:8500" --engine-opt cluster-store=consul://$CONSUL_IP:8500 --engine-opt cluster-advertise="eth0:2376" swarm-master

And now deploy swarm nodes:

$ docker-machine create -d opennebula --opennebula-network-name private --opennebula-image-name boot2docker --opennebula-b2d-size 10240 --swarm --swarm-discovery="consul://$CONSUL_IP:8500" --engine-opt cluster-store=consul://$CONSUL_IP:8500 --engine-opt cluster-advertise="eth0:2376" swarm-node-01

You can repeat this for as many nodes as you want.
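The repetition can be scripted. This sketch only prints one docker-machine invocation per node so the loop can be inspected before actually running it (the CONSUL_IP default and the node-name prefix are assumptions carried over from the commands above):

```shell
#!/bin/sh
# Generate (do not run) one docker-machine command per swarm node.
CONSUL_IP="${CONSUL_IP:-10.0.0.1}"

gen_node_cmds() {
  for i in 01 02 03; do
    echo "docker-machine create -d opennebula" \
      "--opennebula-network-name private" \
      "--opennebula-image-name boot2docker --opennebula-b2d-size 10240" \
      "--swarm --swarm-discovery=consul://$CONSUL_IP:8500" \
      "--engine-opt cluster-store=consul://$CONSUL_IP:8500" \
      "--engine-opt cluster-advertise=eth0:2376" \
      "swarm-node-$i"
  done
}

gen_node_cmds          # review the output, then pipe it to `sh` to execute
```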

Finally, we can connect to the swarm like this:

$ eval $(docker-machine env --swarm swarm-master)
$ docker info
Containers: 3
Running: 3
Paused: 0
Stopped: 0
Images: 2
Server Version: swarm/1.1.2
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 2
└ Status: Healthy
└ Containers: 2
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.021 GiB
└ Labels: executiondriver=native-0.2, kernelversion=4.1.18-boot2docker, operatingsystem=Boot2Docker 1.10.2 (TCL 6.4.1); master : 611be10 - Tue Feb 23 00:06:40 UTC 2016, provider=opennebula, storagedriver=aufs
└ Error: (none)
└ UpdatedAt: 2016-02-29T16:08:41Z
└ Status: Healthy
└ Containers: 1
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.021 GiB
└ Labels: executiondriver=native-0.2, kernelversion=4.1.18-boot2docker, operatingsystem=Boot2Docker 1.10.2 (TCL 6.4.1); master : 611be10 - Tue Feb 23 00:06:40 UTC 2016, provider=opennebula, storagedriver=aufs
└ Error: (none)
└ UpdatedAt: 2016-02-29T16:08:12Z
Kernel Version: 4.1.18-boot2docker
Operating System: linux
Architecture: amd64
CPUs: 2
Total Memory: 2.043 GiB
Name: swarm-master

The options cluster-store and cluster-advertise are necessary to create multi-host networks with overlay driver within a swarm cluster.

Once the swarm cluster is running, we can create a network with the overlay driver (the subnet below is just an example; choose one that does not collide with your existing networks):

$ docker network create --driver overlay --subnet=10.0.9.0/24 overlay_net

and then check that the network has been created:

$ docker network ls

To test the network, we can run an nginx server on the swarm-master:

$ docker run -itd --name=web --net=overlay_net --env="constraint:node==swarm-master" nginx

and fetch the nginx server's home page from a container deployed on another cluster node:

$ docker run -it --rm --net=overlay_net --env="constraint:node==swarm-node-01" busybox wget -O- http://web

As you can see, thanks to the Docker Machine OpenNebula driver plugin, you can deploy a real, production-ready swarm in a matter of minutes.

In the next post, we will show you how to use OneFlow to provide your Docker Swarm with automatic elasticity. Stay tuned!

Latest developments, events and future plans for the upcoming months from the OpenNebula project. Read this newsletter to keep up to date with your favorite Cloud Management Platform.

You might be interested in taking a look at the sponsorship opportunities for the next OpenNebula Conf 2016, due in October in Barcelona. The OpenNebula TechDays planned for this year may also be of interest: upcoming TechDays in March and April will be held in Madrid, Dallas and Toronto.


The OpenNebula team keeps working on OpenNebula 5.0, the next major upgrade, which will incorporate a wealth of new features while keeping a smooth upgrade path. And no, this is no easy task, folks. But we will make it happen.

OpenNebula 5.0 is a revolution around the corner. In a few weeks, a beta version will come out featuring the following highlights:

  • a revamped Marketplace, now a first-class citizen in OpenNebula, to import/export images to and from any datastore (including the new vCenter datastores)
  • vCenter storage management, including datastore selection and disk hotplug, fully integrated with the new Marketplace functionality. Share VMDKs among OpenNebula instances!
  • fully integrated virtual router management (including default and transparent HA for routers). Link virtual networks together using a robust HA router that comes out of the box with OpenNebula

5.0 will also feature a myriad of interface changes, including an extension of the operations available in the Cloud View, to iron out the wrinkles in the user experience, plus a long list of other features.

As you may have noticed, interest in Docker is surging. We see OpenNebula as the perfect IaaS support for Docker, and as such our strategy for integrating the Docker ecosystem with OpenNebula follows a "containers as a service" approach: containers run within VMs, rather than replacing KVM/ESX with containers, to ensure we maintain strong multi-tenancy. First, we want to integrate OpenNebula with Docker Machine to be able to build a container environment within your cloud. This feature is already well advanced, as you can see in this blog post. The second step will be a tight integration with Docker Swarm to be able to create swarms using the OneFlow component.



Lots of exciting things are happening in the OpenNebula community. Collaborations, contributions, and yes! even praise. Let’s review the highlights.

LINBIT has developed a new set of drivers that give OpenNebula support for the excellent DRBD functionality. Check out an informative demo in this video.

A new release of the Perl binding for the OpenNebula Cloud API (OCA) is also worth noting if Perl is your sysadmin scripting language of choice. No excuse now not to automate all the chores in your cloud infrastructure!

A freshly brewed integration of OpenNebula with a professional VDI platform has been announced by UDS Enterprise, to be available in the upcoming 2.0 release of their software.

Our friends at NodeWeavers never stop. Take a look at their new family addition. "Cute" is an understatement, and since it is based on OpenNebula, so is "powerful".

Worldwide adoption of OpenNebula is a fact, and if more proof were needed, this is an excellent example: OpenNebula has been translated into Persian! Check this out.

Overall, we are very proud of the OpenNebula community. OpenNebula wouldn't be half as good without you, so keep it up!


The next OpenNebula Conference, in Barcelona in October 2016, has already been announced. If you plan to attend and can save the date now, you can take advantage of a 40% discount on your Conf tickets. More information is available on the event site. The Conf already has its first sponsor, StorPool. Welcome! Learn about the different sponsorship opportunities on the Conference web page. If you want to understand what all the OpenNebula Conference fuss is about, check the last Conference's material (talks, slides, pictures).

Two of this year's scheduled OpenNebula TechDays have already been hosted. The first one was the TechDay in Kuala Lumpur, at MIMOS, where we learned about the wide OpenNebula ecosystem that is flourishing in Malaysia. We want to thank all the people at MIMOS who received us for making this possible. The second TechDay took place in Sofia, Bulgaria, hosted by StorPool. We also want to thank StorPool for the magnificent hosting and the excellent local promotion, which yielded a great number of attendees.


If you are interested in participating in (or hosting) any of these TechDays, let us know. The next TechDay will take place this month in Madrid, Spain, hosted by Rentalia.

The Swiss Open Systems User Group will be gathering on the 16th of June to discuss cloud computing. If you are around and want to learn more about clouds and OpenNebula, drop by.

You may be interested in OpenNebula Systems' training plans for 2016 for Europe and the US. These courses are designed to train cloud administrators to properly operate an OpenNebula infrastructure. Please contact us if you would like to request training near you.

Members of the OpenNebula team will be present in the following events in upcoming months:

  • VMworld 2016 US, August 28 – September 1, Las Vegas (Mandalay Bay Hotel & Convention Center), Nevada, US.
  • VMworld 2016 Europe, October 17 – 20, Barcelona (Fira Barcelona Gran Via), Spain.

Remember that you can see slides and resources from past events on our Events page. We have also created a SlideShare account where you can see the slides from some of our recent presentations.