Speak at OpenNebulaConf 2019 in Barcelona, Spain!

It’s great to attend OpenNebulaConf, but being a speaker is even better! Come share your insights and experiences with the user community. Whether you are a seasoned speaker or a first-timer, everyone is welcome. This is a great opportunity to connect with your peers and to collaborate with the broader OpenNebula community.

Presentation topics are wide open. If you have a dynamic perspective or unique experiences to share, submit a proposal!

Check out the details and sign up.

We look forward to welcoming you in Barcelona!


A new appliance in the Marketplace: Kubernetes (K8s)

We are happy to announce a new addition to the steadily growing OpenNebula Marketplace. This time we are bringing you the most popular container orchestration platform – Kubernetes. As with the previously introduced appliances (you can read more about them in our previous blog post), our Kubernetes appliance gives you a “press-of-a-button” way to create and deploy a functional service.

In the past, Kubernetes was notoriously hard to set up, which is why projects like Rancher sprang up. (Do you want Rancher as a future appliance? Let us know!) We, too, have tried to make the creation of K8s clusters much simpler for you. The appliance supports multiple contextualization parameters to adapt to your needs and your required configuration. This works in very much the same spirit as all the other ONE service appliances.
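As a rough illustration, contextualization parameters of this kind are set in the CONTEXT section of the VM template. The appliance-specific parameter names and values below are assumptions for the sake of the example – check the appliance documentation for the real ones:

```
CONTEXT = [
  NETWORK            = "YES",
  SSH_PUBLIC_KEY     = "$USER[SSH_PUBLIC_KEY]",

  # Illustrative appliance parameters -- verify the names in the appliance docs
  ONEAPP_K8S_ADDRESS = "10.0.0.10",   # example API address of the master node
  ONEAPP_K8S_PORT    = "6443"         # example Kubernetes API port
]
```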

On top of this, we extended the simplicity and versatility of this appliance with OneFlow service support, which makes perfect sense for Kubernetes clusters. Now you can deploy a whole multi-node K8s cluster with just one click. More info can be found in our Service Kubernetes documentation.
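From the CLI, the same one-click deployment boils down to a single OneFlow instantiation. A minimal sketch (the service template name is an assumption – use whatever name the Marketplace import gave you):

```shell
# List the OneFlow Service Templates available in your cloud
oneflow-template list

# Instantiate the whole multi-node K8s cluster in a single step
# ("Service Kubernetes" is an assumed name -- use the one shown by the list)
oneflow-template instantiate "Service Kubernetes"

# Watch the service reach RUNNING state
oneflow list
```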

Kubernetes, Docker, microservices, containers, and all the other trendy cloud technologies and terminologies can become confusing at times, and not everyone is fully versed in these topics. (Have you heard about DevOps and CI/CD?) So let’s clarify exactly what our Kubernetes appliance does for you and what it doesn’t.

This service appliance provides you with a K8s cluster (one master node and an arbitrary number of worker nodes – including zero). Every node is just a regular VM, with which you are familiar. OpenNebula does NOT manage containers or pods inside a created K8s cluster. When you deploy this service appliance, you get a K8s cluster which exposes the Kubernetes API (on a designated IP address of the master node). You can access it via kubectl or the UI dashboard (pictured below) to create pods, deployments, services, etc. You can also add more nodes to the cluster at any time later using contextualization. But other than that, you are in charge, and it is up to you to keep the cluster up and running.
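Once the cluster is up, day-to-day work happens entirely through standard Kubernetes tooling, not through OpenNebula. A quick sketch (the kubeconfig location is an assumption – check the appliance docs for where credentials are exposed):

```shell
# Point kubectl at the cluster (path to the kubeconfig is an assumption)
export KUBECONFIG=~/kubeconfig-from-master

# Verify the master and workers registered correctly
kubectl get nodes

# Create a deployment and expose it -- all of this lives inside the cluster,
# invisible to OpenNebula, which only sees the node VMs
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80
kubectl get pods
```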

Have a look in the OpenNebula Marketplace.

Check out the video screencast of how to get started with the K8s appliance.


OpenNebula Edge – Maintenance release v.5.8.1 is now available!

There’s plenty to be excited about with 5.8 Edge – and now we have released maintenance release v.5.8.1, with bug fixes and a set of minor new features, including:

  • Added a timepicker for relative scheduled actions
  • Added a vCenter cluster health check in monitoring
  • Implemented nested AND and OR filters when filtering from the CLI
  • Added input for a command to be executed in LXD containers through a VNC terminal
  • Updated Ceph requirements for LXD setups
  • Extended logs in LXD actions with the native container log
  • Added a new API call, one_vmpool_infoextended
  • Added official support for Sunstone banners

Check the release notes for the complete set of new features and bug fixes.


OpenNebula Systems has just announced the availability of vOneCloud version 3.4.

vOneCloud 3.4 is powered by OpenNebula 5.8 “Edge”, and, as such, includes functionalities present in Edge relevant to vOneCloud:

  • Change boot order of VM devices updating the VM Template. More info here.
  • VM migration between clusters and datastores is now supported, check here.
  • Migrate images from KVM to vCenter, or vice versa. More info here.
  • New configuration file that allows changing the default behaviour of the image importation process. More info here.
  • Scheduled VM actions can be specified relative to the VM start; for example: terminate this VM one month after it was created.
  • Automatic selection of Virtual Networks for VM NICs, balance network usage at deployment time or reduce clutter in your VM Template list. More info here.
  • New self-provisioning model for networks, Virtual Network Templates. Users can now instantiate their own virtual networks from predefined templates with their own addressing.
  • Support for NIC Alias: VMs can have more than one IP associated with the same network interface. More info here.

Multiple bugfixes and documentation improvements have been included in this version. The complete list of changes can be checked on the development portal.

vOneCloud 3.4 has been certified with support for vSphere 6.0, 6.5 and 6.7.

If you are looking for additional details about OpenNebula integration with VMware, check out the recently published VMware Solution Brief, as well as this OpenNebula-VMware blog post.



Our newsletter contains the highlights of the OpenNebula project and its Community throughout the month.


In the month of March we maintained a sharp focus on activities surrounding last month’s release of v.5.8 Edge. We have been working on bug fixes, with a release of v.5.8.1 planned for early April. At the same time, internal discussions around “what’s to come” in an upcoming v.5.10 have already begun.

We did take the time to carry out a thorough scalability test and tuning exercise, to break down how well OpenNebula 5.8 scales and to offer some recommendations for achieving optimal performance and scalability. For detailed info on Scalability Testing and Tuning, check out our reference documentation.

We also reviewed and posted how to take advantage of OpenNebula 5.8 Edge with LXD support, and how to get going with a quick installation using miniONE. This is a perfect way to test out v.5.8, starting with AWS and setting it up from start to finish in a few minutes. You can check out the video screencast of the step-by-step instructions.


OpenNebula’s partnership with VMware – and more so its native integration – was highlighted in fine form this month. An OpenNebula-VMware Solution Brief was published on the VMware Solutions Exchange website. Additionally, we published a solution overview on the VMware Ecosystem Partners site.

We also released the 2018 OpenNebula Survey results, which provide a comprehensive look into the development and progression of the OpenNebula project, the growth and evolution of its usage, and a glimpse into what the Community is looking for from OpenNebula in the future.

Lastly, we communicated our shift towards using a Developer Certificate of Origin (DCO) to manage the code contribution process. This allows us to move away from the Contributor License Agreement (CLA) mechanism we have been using until now, and to adopt a more universally appealing approach with the DCO.


As warmer weather approaches (at least here in the Northern Hemisphere), so does the OpenNebula TechDay season. Our first two TechDays of 2019 are coming at the beginning of May:

  • May 8, 2019 – Barcelona, Spain – hosted by CSUC
  • May 16, 2019 – Sofia, Bulgaria – hosted by StorPool

Agendas are being pulled together and will soon be published for your review! Remember, these are FREE one-day events, laden with technical insight from users, tutorials provided by OpenNebula Systems, and a great opportunity to network with the Community.

And don’t forget to plan ahead for OpenNebulaConf 2019 in Barcelona, Spain on October 21-22.

Stay connected!

One of OpenNebula’s main features is its low resource footprint. This allows OpenNebula clouds to grow to a massive scale without a big impact on the hardware demanded. There is a continuous effort from the team behind OpenNebula’s development on efficiency and performance, and several improvements in this area have been included in the latest release, OpenNebula 5.8 “Edge”. The objective of this blog post is to describe the scalability testing performed to define the scale limits of a single OpenNebula instance (single zone). This testing, along with some recommendations to tune your deployment, is described in the new Scalability Testing and Tuning guide of the OpenNebula documentation.

Scalability for OpenNebula can be limited on the server side, in terms of the maximum number of nodes/Virtual Machines (VMs) in a single zone, and on the node side, in terms of the maximum number of VMs a single node is able to handle. In the first case, OpenNebula’s core defines the scale limit, while in the second case, it is the monitoring daemon (collectd) client. A set of tests has been designed to address both cases. The general recommendation is to have no more than 2,500 servers and 10,000 VMs, as well as 30 API requests/s, managed by a single instance. Better performance and higher scalability can be achieved with specific tuning of other components like the DB, using better hardware, or adding a proxy server. In any case, to grow your cloud beyond these limits, you can scale horizontally by adding new OpenNebula zones within a federated deployment. Currently, the largest OpenNebula deployment consists of 16 data centers and 300,000 cores.

Environment Setup

Hardware used for tests was a Packet t1.small.x86 bare metal cloud instance. No optimization or extra configuration besides defaults was used for OpenNebula. Hardware specifications are described as follows:

CPU model: Intel(R) Atom(TM) CPU C2550 @ 2.40GHz, 4 cores, no HT
RAM: 8GB, DDR3, 1600 MT/s, single channel
OS: Ubuntu 18.04
OpenNebula: Version 5.8
Database: MariaDB v10.1 with default configurations
Hypervisor: Libvirt (4.0), QEMU (2.11), LXD (3.0.3)

Front-end (oned core) Testing

This is the main OpenNebula service, which orchestrates all the pools in the cloud (VMs, hosts, virtual networks, users, groups, etc.).

A single OpenNebula zone was configured for this test with the following parameters:

Number of hosts: 2,500
Number of VMs: 10,000
Average VM template size: 7KBytes

Note: Although the hosts and VMs used were dummies, they create an identical entry in the DB compared to a real host/VM with a template size of 7KBytes. For this reason, results should be the same as in a real scenario with similar parameters.

The four most common API calls were used to stress the core simultaneously, in approximately the same ratio experienced in real deployments. The total API load rates tested were 10, 20 and 30 calls per second. Under these conditions – with a host monitoring rate of 20 hosts/second, a pool of 2,500 hosts, and a monitoring period of 125 seconds per host – the response times (in seconds) of the oned process for the most common XML-RPC calls are shown below:

Response Time (seconds)

API Call (ratio)       API Load: 10 req/s   API Load: 20 req/s   API Load: 30 req/s
host.info (30%)        0.06                 0.50                 0.54
hostpool.info (10%)    0.14                 0.41                 0.43
vm.info (30%)          0.07                 0.51                 0.57
vmpool.info (30%)      1.23                 2.13                 4.18

Host (monitoring client) Testing

This test stresses the monitoring probes in charge of querying the state, consumption, possible crashes, etc. of both physical hypervisors and virtual machines.

For this test, virtual instances were deployed incrementally. The monitoring client was executed each time 20 new virtual instances were successfully launched, and before launching 20 additional ones, in order to measure the time needed to monitor every virtual instance. This process was repeated until the node ran out of allocated resources – which happened at 250 virtual instances – and OpenNebula’s scheduler was not able to deploy more. Two monitoring drivers were tested: KVM and LXD. These are the settings for each KVM and LXD instance deployed:

Virtual Instances   OS                  RAM    CPU
KVM VMs             None (empty disk)   32MB   0.1
LXD containers      Alpine 3.8          32MB   0.1

Results for each driver are as follows:

Monitoring Driver   Monitor time per virtual instance
KVM IM              0.42 seconds
LXD IM              0.1 seconds


Since we founded the OpenNebula open-source project more than 10 years ago, we have followed the Contributor License Agreement (CLA) mechanism for software contributions that include new functionality and intellectual contributions to the software. Although the CLA has been an industry standard for open-source projects, it is largely unpopular with developers.

In order to remove barriers to contribution and allow everyone to contribute, the OpenNebula project has adopted a mechanism known as a Developer Certificate of Origin (DCO) to manage the contribution process. The DCO is a legally binding statement that asserts that you are the creator of your contribution, and that you wish to allow OpenNebula to use your work.

The text of the DCO is fairly simple and available from developercertificate.org. Acknowledgement of this permission is done through a sign-off process in Git: the sign-off is a simple line at the end of the explanation for the patch. More info here:
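For example, in a throwaway repository (the name and e-mail below are placeholders), the `-s` flag appends the sign-off trailer automatically:

```shell
# Set up a disposable repo to demonstrate the sign-off
mkdir -p /tmp/dco-demo && cd /tmp/dco-demo
git init -q
git config user.name "Jane Doe"
git config user.email "jane@example.com"

# Commit with -s (--signoff); Git appends the DCO trailer for you
echo "fix" > patch.txt
git add patch.txt
git commit -q -s -m "Fix the frobnicator"

# The commit message now ends with:
#   Signed-off-by: Jane Doe <jane@example.com>
git log -1 --format=%B
```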


We are looking forward to your valuable contributions!


OpenNebula Cloud Management on VMware vCenter

Companies’ data centers continue to grow, handling new and larger workloads, and ultimately making virtual infrastructure and cloud computing a “no-brainer”. For those companies that have invested in VMware platform solutions, it shouldn’t be news that OpenNebula provides a comprehensive and affordable solution for managing one’s VMware infrastructure and creating a multi-tenant cloud environment. Full integration and support for VMware vCenter infrastructure has been a cornerstone feature of OpenNebula. So when questions arise like “How do I effectively turn my vSphere environment into a private cloud?”, “How can I best manage multiple data centers?”, “Is there an easier way to manage provisioning and to control compute workloads?”, or “How can I take advantage of public cloud offerings and seamlessly integrate them with my private, on-premises cloud?”, users with established VMware infrastructure should know that OpenNebula provides an inclusive, yet simple, set of capabilities for Virtual Data Center Management and Cloud Orchestration.

This OpenNebula-VMware Solution Brief provides an overview of the long-standing integration.

The highlights include:

  • OpenNebula offers a simple, lightweight orchestration layer that amplifies the management capabilities of one’s VMware infrastructure.
  • It delivers provisioning, elasticity, and multi-tenancy cloud features, including:
    • virtual data center provisioning
    • data center federation
    • hybrid cloud capabilities to connect in-house infrastructures with public cloud resources
  • Distributed collections of vCenter instances across multiple data centers can be managed by a single instance of OpenNebula.
  • Public cloud resources from AWS and Microsoft Azure can be easily integrated into one’s OpenNebula cloud and managed like any other private cloud resource.
  • And with the validation of OpenNebula on VMware Cloud on AWS, one can grow his or her on-premises infrastructure on-demand with remote vSphere-based cloud resources running on VMware Cloud on AWS, just as one could do with local VMware infrastructure resources. All this, in a matter of minutes.

The compatibility and features that OpenNebula offers to VMware users have been fundamental elements of our software solution for a long time. However, that doesn’t make it any less exciting to “spread the word”!



Executive Summary

We’d like to thank all of you who shared your perspective on OpenNebula as part of our 2018 Architecture Survey. This is the fourth architectural survey of OpenNebula since 2012, and the results were collected during the period of December 4, 2018 through January 11, 2019. Your participation here is fundamental to our strategic focus on providing features and support that align with the infrastructure platforms and configurations you demand.

We have only included in the analysis the respondents that are using OpenNebula 5.x (latest series) and who we deem reliable because they have provided identification details that allow us to verify the answers of the survey. This is important given that our main aim is to have accurate and useful information about OpenNebula deployments. This survey is not a market survey and does not express all OpenNebula deployments worldwide.

The data provided helps to shed light on how OpenNebula is being used by the community, as well as providing some indicators of where to aim for the future. Compared with our previous survey, taken in 2015, there are some notable findings:

  • The types of workloads handled by OpenNebula are evolving, with a growing proportion (85%) of them run in a Production environment, while others run in Dev/Test (73%) and in Proof of Concept mode (29%). OpenNebula shows its increasing maturity, compared with the 73% of deployments in production reported in our previous survey.
  • A few other indicators of its developing stability are the Number of Users supported by OpenNebula clouds and the Number of Nodes. For each of these metrics, there has been steady growth across the board.
    • The growth in user count for OpenNebula clouds has progressively increased with now close to 20% servicing more than 1,000 users.  Smaller OpenNebula clouds, of 100 users or less, now constitute 45%, which is down from 70% in 2015.
    • And “node count” has seen steady growth as well, with 77% of cloud environments having more than 10 nodes, (up from 56% in 2015). Similarly, we see that 29% of cloud environments have more than 100 nodes, where in 2015 that figure was 20%.
  • Usage of OpenNebula in commercial and industrial enterprises continues to grow. Its employment in organizations across various industries shows a growing distribution, with increasing usage in the IT, Telecommunications and Internet, Hosting and MSP, and Media and Gaming sectors – amounting to 75% of total usage. And while still an important sector of OpenNebula usage, Academia and Research constitutes a smaller percentage of the overall – now 14%, down from 25% in our last survey.
  • Hybrid cloud usage continues to grow, with an increasing percentage of OpenNebula integration with AWS (39%), as well as with Microsoft Azure (22%).
  • With the advent of Configuration Management and Remote Execution tools, we see that a hefty segment of OpenNebula users (73%) are taking advantage of these to introduce automation to their environments.
  • A growing usage of CentOS as an Operating System has reached 50% across our survey participants, while usage of both Ubuntu and Debian have remained fairly steady at 42% and 22% respectively.

And OpenNebula continues in its aim to be the simplest and most flexible open source solution for private cloud and data center virtualization management.  Our user community rates it highly because of its Simplicity (83%), its Openness (69%) and Flexibility (56%), as well as its Vendor-neutrality (52%).

Thank you again for participating in our survey! Below you can review the detailed results.

A.  About the Organization

This year we broke out the demographics across multiple categories, with “Information Technology”, “Cloud Hosting and MSPs”, and “Telco and Internet” companies assuming large portions of the demographics – these, added together with “Media and Gaming” and “Web, SaaS, and eCommerce”, total up to a growing 75% of industrial/commercial users. 14% of users work in Science, R&D, and Academia, while just under 11% work for Government and Non-Profits.

Type of Organization


Usage by larger companies has steadily grown over time, now reaching 20% for companies with 5,000 employees or more, compared to 13% in our last survey. While smaller companies (of 500 employees or less) continue to be avid users of OpenNebula, that demographic constitutes a smaller base of 58%, compared to 64% in our last survey.


Size of Organization (# of employees)


48% of deployments are located in Europe or in Russia, a slight shift downward from 50% in our last survey. We also see a small tick upward in growth within North America to 33% from 30%. These two geographic regions continue to lead the usage of OpenNebula by a large margin.

Geographic Region


B. About Cloud Usage

The usage of OpenNebula for Production workloads has seen steady growth to 85%, from 73% in 2015. Its use for Development and Testing has remained firm at 73%, while there is significant Proof of Concept work being done, as well (19%).

Type of Workload (allows for multiple selections)


It is logical that the “On-premise private cloud” remains the most common type of cloud built with OpenNebula, with 78% of respondents confirming it as part of their cloud infrastructure. However, the widening range of cloud types being created is again evident, with 54% of users venturing into “Hybrid private clouds”, 55% creating “Federated private clouds”, and 17% and 26% creating “Hosted private clouds” and “Distributed private clouds”, respectively. Even 8% are beginning to venture into creating “Edge private clouds”. With the imminent release of v.5.8 Edge, it will be interesting to see how that last figure grows.

Type of Cloud Architecture (allows for multiple selections)


And from a use case perspective, 71% of organizations use OpenNebula for “Data Center Virtualization management”, and 48% use it to establish “Public Clouds, VPS and MSPs”.  39% of users take advantage of OpenNebula to create cloud environments on top of their VMware infrastructure, and 37% are laying the groundwork for Enterprise clouds.

Type of Use Case (allows for multiple selections)


Since 2015, the number of users in most OpenNebula clouds has seen significant growth. Clouds with more than 1,000 users have grown to 19% from 8%, while smaller ones of 100 users or less now constitute 45%, down from 70% in 2015.

Number of Users


Another new question we introduced this year asked which other cloud providers users interact with. Amazon Web Services (AWS) is by far the most common, at 61%, with Microsoft Azure following at 35%.

Interaction with Other Cloud providers (allows for multiple selections)


C. About Cloud Configuration

59% of OpenNebula environments are “federated”, meaning that they have more than a single zone, and 9% run more than 10 different zones. This is up from 51% and 5%, respectively, from the last time we collected data.

Number of Zones (OpenNebula instances)


Node count – another metric for measuring cloud size – has steadily increased as well. 77% of cloud environments have more than 10 nodes, a significant increase from 2015, when this measure was 56%. And currently 29% of cloud environments have more than 100 nodes, where in 2015 that figure was 20%.

Number of Nodes


KVM remains the most commonly used hypervisor in OpenNebula environments, yet usage with the VMware hypervisor continues to be a very solid use case. The percentage of KVM users has stayed fairly steady, at 75% compared to 73% previously, and usage with VMware hypervisors has increased slightly, at 39% compared to 37%.

Hypervisor usage (allows for multiple selections)


56% of users have implemented some form of “hybrid cloud” in their environment. The most popular public cloud providers used in tandem with OpenNebula are Amazon Web Services (AWS) at 39% and Microsoft Azure at 22%. These two were the most popular in our last survey as well, but have increased slightly from 30% and 16% respectively.

Public Clouds used for Hybrid (allows for multiple selections)


Shared datastores at 53% and Ceph at 40% remain the most widely used storage solutions in open environments – while shared datastore usage dropped slightly from 60%, Ceph has remained the same at 40%. VMware FS, at 38%, is used in VMware-based deployments, mainly through vCenter, and dropped slightly from 40%.

Storage configurations (allows for multiple selections)


The most common network configuration is still the standard bridged network configuration (40%). 802.1Q VLAN (31%) and Open vSwitch (25%) remain widely used choices as well, while there has been some movement to VXLAN and Open vSwitch VXLAN, with 14% and 10% respectively. And for VMware deployments, we see steady usage of VMware networking (37%), primarily used through vCenter.

Networking configurations (allows for multiple selections)


Authentication practices, while shifting slightly towards external authentication systems, have remained fairly static. The majority of deployments use the built-in user/password authentication (68%), while LDAP and Active Directory have gained slightly more traction since the last survey, with 22% and 21% usage respectively. SSH, while widely used (42%), has seen a slight drop from 50%.

Authentication configurations (allows for multiple selections)


CentOS and Ubuntu are still the most popular Linux distributions for creating OpenNebula clouds, with usage at 48% and 41% respectively – a slight upward movement from 44% and 40%. Debian has remained steady at 22% usage.

Operating Systems (allows for multiple selections)


A new question was introduced to understand which configuration management systems are employed to take advantage of automation. The most common platform is Ansible, with 50% usage. Puppet is another popular tool, at 29%. A few other tools are used with some regularity, like Chef (8%) and SaltStack (6%), while 27% of respondents state that they do not use any configuration management system at all.

Configuration Management (allows for multiple selections)


Similar to the question on configuration management systems, we asked which tools are used to deploy OpenNebula, and the same set of tools lined up in similar fashion. Ansible is the most commonly used, with 40% usage, and Puppet is used by 21%. Tools like Chef (5%) and SaltStack (4%) have a small user base, while 41% of respondents state that they do not use any deployment tools.

Deployment tools (allows for multiple selections)


Another question inquired about Container or PaaS tools used to manage applications. While a large portion of respondents (48%) state that they are not yet using these types of tools, among those who are, the most commonly used is Kubernetes, with 25% usage. Docker Swarm (14%) and OpenShift (10%) follow in popularity, while 9% of respondents state they are building their own solutions.

Container / PaaS for Application Management (allows for multiple selections)


When asked which Advanced Components are used or planned to be used, respondents confirmed a large selection. The most commonly selected was “High Availability”, at 65%. Other popular components are “Application Containerization” (47%), “Data Center Federation” (43%), and “OneFlow” (42%). A few others are listed, while 14% state that they are not using, or interested in using, any advanced components just yet.

Advanced Components (allows for multiple selections)


From a provisioning-interface perspective, a large majority of OpenNebula users take advantage of the Sunstone interface (82% of users). The CLI and API both have fairly extensive usage across the community, with 41% and 39% respectively, and the Cloud View (self-service portal) has 27% usage.

Provisioning Interfaces (allows for multiple selections)


We inquired about which OpenNebula APIs are being used or planned to be used in the near future, and curiously, a very large constituency (45%) has shown interest in being able to use the Python bindings once v.5.8 is released. Of the other APIs available at the time of the survey, use or intended use of the Ruby bindings is common, at 26%, while the Java and Go bindings follow with 16% and 13% respectively. 29% of users state no current interest in using the OpenNebula APIs.

OpenNebula APIs (allows for multiple selections)


And lastly, as seems to be the case for many years running, “Stability”, “Flexibility”, and “Openness” continue to be the top reasons why users choose OpenNebula.

Why OpenNebula? (allows for multiple selections)


Thank you again!

Stay connected!


One of the biggest features in the recent OpenNebula 5.8 Edge release is, no doubt, the support for Linux containers (LXD) – which we already covered in our blog.

If you are tempted to give it a try, go ahead, it’s really simple! You can start in AWS with the common Ubuntu 18.04 image and the whole setup from start to finish won’t take you more than a matter of minutes.

The minimal recommended instance size is perhaps t2.medium. Just give it at least 25GB of disk space and allow access to TCP port 9869, where the web UI is running.

Then comes the simple deployment, for which you download miniONE

wget https://github.com/OpenNebula/minione/releases/download/v5.8.0/minione

grant execution permission to the tool

chmod u+x minione

and deploy OpenNebula with a pre-configured LXD environment just by running

sudo minione --lxd

When it’s done, you can follow the try-out section of the miniONE guide to launch your first containers. miniONE prepares one image and template for you, named “CentOS7 - KVM” – but don’t worry about the name, as it works for LXD too. Likewise, the virtual network is exactly the same – no differences at all. The scheduler simply checks which hosts (hypervisors) are available and decides where to launch. And since we ran miniONE with the --lxd parameter, the LXD host will be configured.
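Launching the first container then follows the usual VM workflow. A quick sketch (the template name is the one miniONE creates at the time of writing; yours may differ):

```shell
# See the image and template miniONE prepared
onetemplate list
oneimage list

# Instantiate it -- on the LXD host this boots a container, not a KVM VM
# ("CentOS7 - KVM" is the name miniONE gives the template; check the list output)
onetemplate instantiate "CentOS7 - KVM"

# Watch the instance reach RUNNING state
onevm list
```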

Follow along step-by-step in the following screencast video:

  OpenNebula 5.8 – Install with LXD containers in minutes using miniONE

Feel free to check out other images from the OpenNebula Marketplace, or create an additional Marketplace with the https://images.linuxcontainers.org/ backend, which contains plenty of upstream LXD containers.

Give it a shot, and share your feedback!