Agenda for the Upcoming Cloud TechDay in Ede, NL

Next week, on the 13th of May 2016, Ede (NL) will host a new edition of the OpenNebula Cloud TechDay.

This TechDay will feature a 4-hour hands-on tutorial in which you will learn how to install and configure an OpenNebula Cloud from scratch. The presentations in the afternoon will focus on Ceph: we want you to learn as much as possible about Ceph best practices and how to use it with your OpenNebula Cloud.

The agenda for the afternoon is:

  • Object scale-out with Eternus CD10000, by Walter Graf and Frits de Kok from Fujitsu.
    Introduction to object storage, Ceph concepts and internals, and how Fujitsu managed to overcome the administrative challenges involved in running a Ceph cluster.
    Fujitsu and BIT will be giving a demo of the Fujitsu CD10000 and OpenNebula.
  • Building the Dutch National Archive with Ceph, by Wido den Hollander, member of the Ceph Board.
    The Dutch National Archive has chosen Ceph to store its data in Groningen, The Netherlands. Together with ODC Noord, I’ve built the 8 PB Ceph cluster running on an IPv6-only network. This talk will go in depth into the design decisions made when building this cluster.
  • The OpenNebula Ceph Drivers, by Jaime Melis from OpenNebula Systems.
    Overview of the Ceph drivers: configuration attributes, peculiarities, and everything you should know before deploying your OpenNebula + Ceph cloud.
  • BIT’s experiences playing with Ceph and OpenNebula, by Stefan Kooman from BIT.
    BIT has been running a Ceph test cluster for some time and will talk about their experiences so far. A live demo is planned in which we will test Ceph’s ability to recover from failure.

Join this TechDay to learn about OpenNebula, the Cloud, and Ceph, and to benefit from the expertise of the speakers!







2015 OpenNebula Cloud Architecture Survey Results

Executive Summary

This is the third survey of OpenNebula deployments since September 2012. Responses to this voluntary survey were collected online between December 3 and December 11, 2015. While previous surveys were open for several months, responses this time were collected over just one week, because the goal was a snapshot of the architectural components of existing OpenNebula clouds, in order to improve support for the most demanded infrastructure platforms and configurations.

Although several hundred organizations took part in the survey, the analysis includes only the 190 respondents that are using OpenNebula 4.x (the latest series) and whom we deem reliable because they provided identification details that allow us to verify their answers. This is important given that our main aim is to have accurate and useful information about OpenNebula deployments. This survey is not a market survey and does not represent all OpenNebula deployments worldwide. Since the foundation of the open-source project in November 2007, OpenNebula has been downloaded more than 360,000 times from the project site (280,000 times since our first survey in September 2012 and 160,000 times since our latest survey in August 2014), not including other software repositories or third-party distributions.

Regarding the use of OpenNebula, the survey shows that 43% of deployments are in industry and 13% in research centers. Most organizations (80%) are in Europe, Russia, or North America and use OpenNebula to build private clouds (93%). When asked about the type of workload, 73% said they use OpenNebula for running production workloads.

Regarding the size of the clouds, 80% of deployments have fewer than 100 nodes and 10% have more than 500 physical nodes. 51% of deployments consist of more than one OpenNebula zone, and 5% run more than 10 zones. Among the advanced components offered by OpenNebula, High Availability, at 73%, is the most widely used, in line with the predominantly production usage of OpenNebula.

Regarding the building blocks of the cloud, KVM at 79% and VMware vCenter at 37% are the dominant virtualization platforms, and CentOS at 44% and Ubuntu at 40% are the most widely used Linux distributions for OpenNebula clouds. The preferred choices for storage back-ends are shared FS and Ceph, at 60% and 40% respectively. Regarding networking, most deployments (45%) use the standard Linux bridge for network configuration, 35% use 802.1Q, and 33% use Open vSwitch. 44% of deployments use the hybrid cloud functionality offered by OpenNebula; Amazon EC2 at 30% and Microsoft Azure at 16% are the most widely used public clouds.

In comparison to previous survey findings in 2014, there have been some changes:

  • OpenNebula shows its increasing maturity, with 73% of deployments in production compared to 62% reported in our previous survey.
  • Growth in North America has accelerated, now representing 30% of responses, up from 20%.
  • KVM powers the majority of OpenNebula deployments, growing from 48% to 79%.
  • There is a higher rate of adoption in VMware environments, from 28% to 37%.
  • Ubuntu grows from 36% to 40% and Debian falls from 33% to 22% as operating systems used to build the cloud.
  • The use of the EC2 cloud API decreases from 25% to 10%.
  • The use of Ceph has grown considerably from 17% to 40%.
  • The use of LVM as a storage solution decreases from 22% to 12%.

On the whole, OpenNebula continues to be loved by its users for its flexibility (82%), simplicity (80%), and openness (72%). These results are aligned with our mission — to become the simplest cloud enabling platform — and our purpose — to bring simplicity to the private and hybrid enterprise cloud. OpenNebula exists to help companies build simple, cost-effective, reliable, open enterprise clouds on existing IT infrastructure.

We would like to thank all respondents that took part in the survey!

A. About the Organization

43% of respondents indicated that they work in industry, while 13% work in research centers. These results are similar to the previous survey’s.


Type of Organization


50% of deployments are in Europe and Russia, a small reduction compared with the previous survey, where the share was 54%. The number of deployments in North America grows from 20% to 30%. 80% of respondents are located in Europe, Russia, and North America.


Geographic Region


65% of organizations are small companies with fewer than 500 employees, and only 7% have more than 10,000 employees.


Number of Employees in the Organization

B. About the Type of Cloud

93% of respondents are running a private cloud for internal operations, while 34% are running a public cloud to offer utility services. Compared with the 2014 survey data, the number of public clouds decreases from 40% and the number of private deployments increases from 84%.


Type of Cloud (people may select more than one checkbox)


73% of respondents are running non-critical environments or peripheral installations for testing or development applications, while 73% are using the cloud for running production workloads. We see that OpenNebula is increasingly mature, with more deployments moving into the production stage compared with prior survey data: 42% in 2012 and 62% in 2014.


Type of Workload (people may select more than one checkbox)


The number of users in most clouds (70%) is fewer than 100. Many of these deployments use OpenNebula as a virtual data center infrastructure manager and not as a cloud provisioning platform. Similar results were collected in the previous edition of the survey.


Number of Users

C. About the Cloud Architecture

56% of OpenNebula deployments have more than 10 nodes, and 10% of the deployments have more than 500 physical nodes. Similar results were collected in the previous edition of the survey.


Number of Nodes


51% of deployments are federated environments consisting of more than one OpenNebula zone, and 5% are running more than 10 zones. This means a slight increase, from 44%, in the number of federated environments compared to the previous survey.


Number of Zones


KVM at 79% and vCenter at 37% are the most widely used virtualization platforms, followed by Xen at 12%. VMware ESX drivers are used by only 4% of deployments; most VMware users have migrated from the ESX to the vCenter drivers, which bring many benefits. Compared to the previous survey in 2014, the number of KVM users has grown considerably from 48% to 79%, and the number of VMware users has grown from 28% to 37% (vCenter support was introduced just after the previous survey). Other hypervisors include those not part of the main OpenNebula distribution that are supported through community plugins.


Hypervisor (people may select more than one checkbox)


Shared file system at 60% and Ceph at 40% are the most widely used storage solutions in open environments. The use of Ceph has grown considerably from 17% in 2014. FS LVM, Block LVM and GlusterFS are used by 17%, 12% and 12% of organizations respectively. VMware FS at 40% is used in VMware-based deployments, mainly through vCenter.


Storage Configuration (people may select more than one checkbox)


Most deployments (45%) use the standard Linux bridge for network configuration; 35% use 802.1Q; 33% use Open vSwitch; and 13% use VXLAN. VMware networking, at 40%, is used in VMware-based deployments, mainly through vCenter. These results are similar to the previous survey’s.


Network Configuration (people may select more than one checkbox)


Regarding authentication, most organizations (73%) use the built-in user/password system, while SSH and LDAP/AD, at 50% and 36% respectively, are the most popular external authentication systems. These results are similar to the previous edition of the survey.


Authentication Configuration (people may select more than one checkbox)


CentOS at 44% and Ubuntu at 40% are the most widely used Linux distributions for building OpenNebula clouds. CentOS falls slightly from 46%, and Ubuntu grows from 36%. Debian decreases from 33% to 22% of deployments.


Operating System (people may select more than one checkbox)


Among the advanced features offered by OpenNebula, High Availability, with 73%, is the most widely used. DC federation and Flow multi-VM are the next most used features, with 55% and 45% respectively. The use of the EC2 cloud API drops from 25% to 10%.


Advanced Components (people may select more than one checkbox)


In this survey edition we added a new question about the use of hybrid cloud drivers. 44% of deployments use the hybrid cloud functionality offered by OpenNebula. Amazon EC2 at 30% and Microsoft Azure at 16% are the most widely used public clouds.


Hybrid Cloud (people may select more than one checkbox)

D. Why OpenNebula

Once again, simplicity, flexibility, and openness remain the main reasons for choosing OpenNebula.


Why OpenNebula (people may select more than one checkbox)

Automated Oversubscription and Dynamic Memory Elasticity for OpenNebula

In Cloud Management Platforms, users typically deploy their VMs from pre-defined templates that specify a fixed amount of memory. Users may be able to customize the size of their VMs, but they tend to overestimate the memory required for their applications. For example, three 4 GB VMs whose applications actually use 1 GB of RAM fit in a 12 GB node, but leave no room for additional VMs. Adjusting their memory down to 1 GB makes it possible to deploy additional VMs on that node, as shown in the next figure.
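The arithmetic of that example can be sketched in a few lines (a toy illustration, not CloudVAMP code):

```python
# Toy sketch of the oversubscription arithmetic from the example above:
# shrinking over-allocated VMs frees room for additional VMs on the node.
NODE_MEM_GB = 12

def free_capacity(node_mem_gb, vm_allocations_gb):
    """Memory left on the node, counting the VMs' *allocated* sizes."""
    return node_mem_gb - sum(vm_allocations_gb)

print(free_capacity(NODE_MEM_GB, [4, 4, 4]))  # 0: three 4 GB VMs fill the node
print(free_capacity(NODE_MEM_GB, [1, 1, 1]))  # 9: trimmed to real usage, room remains
```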


CloudVAMP is an open-source development that manages these situations for all the nodes in an OpenNebula Cloud deployment based on the KVM hypervisor. CloudVAMP monitors the memory usage of the Virtual Machines (VMs) and dynamically changes the memory allocated to them by reclaiming their unused free memory. CloudVAMP then enables OpenNebula to use that reclaimed memory, increasing the VM-per-node ratio. To prevent memory overload on the physical hosts, live migration is applied to accommodate increasing memory demand by VMs across the OpenNebula Cloud.

Some Technical Details

CloudVAMP consists of three components:

  • Cloud Vertical Elasticity Manager (CVEM). An agent that analyzes the amount of memory actually needed by each VM and dynamically updates the memory allocated to it, according to a set of customizable rules.
  • The Memory Reporter (MR). An agent that runs in the VMs and reports the free and used memory and the swap usage of the applications in the VM to the OpenNebula monitoring system.
  • The Memory Oversubscription Granter (MOG). A system that informs OpenNebula about the amount of memory that can be oversubscribed from the hosts, to be taken into account by the OpenNebula scheduler.
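The kind of customizable rule the CVEM applies can be sketched as follows (an illustrative Python toy; the 20% margin, the floor, and the function name are assumptions, not CloudVAMP's actual defaults):

```python
# Hypothetical vertical-elasticity rule in the spirit of the CVEM described
# above: allocate the memory actually used plus a safety margin, never going
# below a floor nor above the template's original size. The margin and
# bounds below are illustrative, not CloudVAMP's real defaults.
def target_allocation(used_mb, template_mb, margin=0.20, floor_mb=512):
    """New allocation: used memory plus a margin, clamped to [floor, template]."""
    wanted = int(used_mb * (1 + margin))
    return max(floor_mb, min(wanted, template_mb))

print(target_allocation(1024, 4096))  # 1228: most of the 4 GB becomes reclaimable
print(target_allocation(3900, 4096))  # 4096: capped at the template size
```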

CloudVAMP integrates with OpenNebula in several ways. The MR can be staged into the VMs using the contextualization mechanisms provided by OpenNebula, or it can be pre-installed in the Virtual Machine images. It contacts OneGate to report the memory usage of the VMs. The CVEM is installed as a daemon on the front-end node of the OpenNebula Cloud. Finally, the MOG is implemented as a new Information Manager in OpenNebula. The interaction with KVM is performed by means of libvirt; therefore, no modifications to the OpenNebula worker nodes are required. The interaction between the components is shown in the following figure:


Benefits of Using CloudVAMP

Deploying CloudVAMP in an OpenNebula Cloud seamlessly allows OpenNebula to deploy more VMs per physical host, achieving increased server density. The memory usage of VMs is monitored in order to satisfy increased memory demands from the applications running in them. Live migration is used to redistribute VMs without downtime if necessary, without any user or sysadmin intervention. This enables increased usage of the hardware platform supporting an OpenNebula Cloud.

In particular, at the GRyCAP research group we have integrated CloudVAMP in order to accommodate a larger number of incoming jobs from the ES-NGI (the Spanish National Grid Initiative) that are executed on a virtual elastic cluster deployed and managed by EC3 (Elastic Cloud Computing Cluster). The virtual cluster, deployed on top of our OpenNebula Cloud, is horizontally scaled whenever incoming jobs are received (i.e., deploying additional Worker Nodes (WNs)) and vertically scaled (i.e., adjusting the allocated memory to the VMs) in order to let OpenNebula deploy additional WNs in the same host, if necessary. Further details of this case study are available in CloudVAMP’s reference publication.


CloudVAMP has been developed by the GRyCAP research group at the Universitat Politècnica de València. It is available under the Apache 2.0 license at GitHub.

There is further information in CloudVAMP’s web page and in the corresponding publication:

Germán Moltó, Miguel Caballer, and Carlos de Alfonso. 2016. “Automatic Memory-Based Vertical Elasticity and Oversubscription on Cloud Platforms.” Future Generation Computer Systems 56: 1–10.

Contributions, feedback and issues are very much welcome.

OpenNebula Docker Driver and Datastore

ONEDock is a set of extensions for OpenNebula to use Docker containers as first-class entities, just as if they were lightweight Virtual Machines (VMs). For that, Docker is configured to act as a hypervisor, so that it behaves just as KVM or other hypervisors do in the context of OpenNebula.

The underlying idea is that when OpenNebula is asked for a VM, a Docker container is deployed instead. In the context of OpenNebula it is managed as if it were a VM, and the user can use IP addresses to access the container.

Docker Machine and similar projects deploy VMs on different Cloud Management Platforms (e.g. OpenNebula, OpenStack) or commercial providers (such as Amazon EC2) and install Docker on them. Afterwards, it is possible to deploy and manage Docker containers inside them, using the Docker client tools that communicate directly with the Docker services deployed inside the aforementioned VMs.

Instead, ONEDock takes a different approach, deploying Docker containers on top of bare-metal nodes and thus treating the containers as first-class citizens in OpenNebula. This makes it possible to seamlessly integrate the benefits of Docker containers (quick deployment, limited overhead, availability of Docker images, etc.) into a Cloud Management Platform such as OpenNebula. On the other hand, it provides containers with features usually reserved for VMs (e.g. enhanced IP addressing, attachment of block devices, etc.).

ONEDock tries to adapt Docker semantics to the OpenNebula context. The workflow for a whole use-case is the following:

  1. An image is registered in a datastore of type ‘onedock’ using the oneimage command.
  2. ONEDock downloads the image from Docker Hub.
  3. A VM that uses an image registered in the ‘onedock’ datastore is requested.
  4. When the VM is scheduled, ONEDock creates a Docker container instead of a VM, and the container is daemonized (i.e. kept alive).
  5. If the container has been connected to a network, it can be accessed (e.g. using ssh or http).

The most prominent feature of ONEDock is that it does not introduce any API changes, and therefore does not modify the way of interacting with OpenNebula: it is possible to use the ONE CLI (i.e. oneimage, onevm, onetemplate), OpenNebula Sunstone, XML-RPC, etc. and keep the usual lifecycle for the VMs.
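As a toy illustration of step 4 of the workflow above (hypothetical, not ONEDock's actual driver code; the request field names are made up), a VM deployment request could be translated into a daemonized docker run invocation like this:

```python
# Toy illustration of how a VM-like request could map onto "docker run -d".
# This is NOT ONEDock's driver code; the dictionary fields are assumptions.
def deploy_command(vm):
    """Build the docker run argv for a VM-like container request."""
    cmd = ["docker", "run", "-d", "--name", f"one-{vm['id']}"]
    if "memory_mb" in vm:
        cmd += ["-m", f"{vm['memory_mb']}m"]  # honour the VM's memory cap
    # Keep the container alive so it behaves like a long-running VM
    cmd += [vm["image"], "/bin/sh", "-c", "sleep infinity"]
    return cmd

print(" ".join(deploy_command({"id": 42, "memory_mb": 512, "image": "ubuntu:16.04"})))
# docker run -d --name one-42 -m 512m ubuntu:16.04 /bin/sh -c sleep infinity
```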

Technical details

ONEDock provides four components that need to be integrated into the OpenNebula deployment:

  • ONEDock Datastore, which makes it possible to create a datastore containing Docker images. It is self-managed in the sense that images are created as references that are automatically downloaded from Docker Hub.
  • ONEDock Transfer Manager, which stages the Docker images of a Docker datastore onto the virtualization hosts.
  • ONEDock Monitoring Driver, which monitors the virtualization hosts in the context of the Docker hypervisor.
  • ONEDock Virtual Machine Manager, which carries out the tasks related to the lifecycle of the Docker containers as if they were VMs.

These components have to be installed in the proper folders of a ONE front-end (i.e. /var/lib/remotes/) and activated in the oned.conf file. Therefore, no modifications to OpenNebula's source code are required.

Once this has been done, it is possible to create datastores of type ‘onedock’ and virtualization hosts that use ‘onedock’ as the virtual machine manager.

The ONEDock datastore

In order to deploy a Docker container, a Docker image is required. When you run a container, Docker automatically retrieves the image from the Docker Hub repository.

To avoid having every virtualization host access Docker Hub, and to act as a kind of cache, ONEDock supports a private registry installed on the OpenNebula front-end. The references to the Docker images then point to the private Docker registry. ONEDock supports Docker registry v2.0.
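As an illustration of that indirection, a tiny helper (hypothetical, not ONEDock's actual code; the registry hostname is made up) could rewrite Docker Hub style references to point at the private registry:

```python
# Illustrative helper (not ONEDock's actual code) showing how an image
# reference registered in the datastore could be rewritten to point at the
# private registry on the front-end instead of Docker Hub.
def to_private_registry(image_ref, registry="frontend.example:5000"):
    """Prefix a Docker Hub style reference with the private registry host."""
    # Bare references like "ubuntu" implicitly mean "ubuntu:latest"
    if ":" not in image_ref.rsplit("/", 1)[-1]:
        image_ref += ":latest"
    return f"{registry}/{image_ref}"

print(to_private_registry("ubuntu"))           # frontend.example:5000/ubuntu:latest
print(to_private_registry("library/redis:3"))  # frontend.example:5000/library/redis:3
```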

The network in ONEDock

Docker containers are conceived to run applications, so it is common for their ports to be redirected to public ports on the machine hosting the container. ONEDock changes this behaviour in order to expose all the ports of the container, as would happen in a VM. Therefore, you can run different services on different ports without needing to expose them explicitly. The container has an IP address where all the ports are available.

Testing ONEDock

ONEDock can be evaluated in a sandbox before deploying it in your on-premises Cloud. With Vagrant, it’s as easy as 1, 2, 3:

  1. Install Vagrant
  2. Spin up the Vagrant VM, which will be automatically configured with ONE, ONEDock, the Docker registry, and all the needed components.
  3. Start creating Docker containers with the common ONE commands (e.g. onevm create, etc.)


Alternatively, with LXC:

  1. Install LXC
  2. Create a testing container (using a self-contained CLI utility that installs ONE, ONEDock, the Docker registry, and all the needed components).
  3. Start using ONE by issuing the common ONE commands (e.g. onevm create, etc.)

Getting ONEDock

ONEDock has been developed in the framework of the INDIGO-DataCloud project under the Apache 2.0 license. You can get it from the public repository.

ONEDock is accepting contributions. You are invited to interact with us in the GitHub repository by asking questions or opening new issues.

OpenNebula is 8 Years Old!

After the release of OpenNebula 4.14.2, we are incredibly proud to celebrate the 8th birthday of the OpenNebula Project. Our open source cloud management platform turns eight and looks ahead to the release of major version 5.0 in 2016.

Eight years ago, we announced the open-source community project. Since then, we’ve seen our user base grow from a handful to one of the most vibrant user communities around. We can’t tell you how much it means to us that so many people have invested time to help us improve OpenNebula – every issue filed, every bug solved, every feature requested and every question submitted was a gift freely given by each and every one of you to each other, and we are grateful for it. OpenNebula would not be OpenNebula without you.

We are excited to see what the next year will bring.

On behalf of the OpenNebula Team

Last Opportunity to Contribute to OpenNebulaConf: Lightning Talks Still Available!

As you may know, the lineup and agenda for the second OpenNebula Conference (taking place 20-22 October in Barcelona) are already closed. This high-quality content ensures that the conference is the perfect place to learn about Cloud Computing and to understand how industry leaders in different sectors are using OpenNebula in their datacenters.

There is however still a chance to contribute to the conference. The lightning talks are 5 minute plenary presentations focusing on one key point. This can be a new project, product, feature, integration, experience, use case, collaboration invitation, quick tip, or demonstration. This session is an opportunity for ideas to get the attention they deserve. Remember the rules:

  • Five minutes and only five minutes
  • Three slides and only three slides
  • Focus on only one key point: use case, experience, new feature, demo…

We have two 30-minute sessions for lightning talks and there are still slots available, so now is the time to send us your proposal!

OpenNebula – Securing Sunstone’s NoVNC connections with Secure Websocket and your own Certificate Authority

When dealing with NoVNC connections I faced some problems as a newbie, so today I’m sharing this post, which may help you.

If you’re already using SSL to secure Sunstone access, you may get an error when opening a VNC window: “VNC Connection in progress”. It’s quite possible that your browser is silently blocking the websocket-based VNC connection. The reason? You’re using an HTTPS connection with Sunstone, but you’re trying to open an unencrypted websocket connection.


This is easily solved: just edit the following lines in the # UI Settings section of your /etc/one/sunstone-server.conf configuration file:

:vnc_proxy_support_wss: yes
:vnc_proxy_cert: /etc/one/certs/one-tornasol.crt
:vnc_proxy_key: /etc/one/certs/one-tornasol.key

We’ve just activated the secure websockets (wss) option and told Sunstone where to find the SSL certificate and the key (in case it’s not already included in the cert). Now, just restart your Sunstone server.


There’s another issue with VNC and SSL when using self-signed certificates. When running your own lab or a development environment, you may not have an SSL certificate signed by a real CA and opt for self-signed certificates, which are quick and free to use… but this has some drawbacks.

Trying to protect you from security threats, your Internet browser may have problems with secure websockets and self-signed certificates, and messages like “VNC Disconnect timeout” and “VNC Server disconnected (code: 1006)” may show up.


In my labs I just use the openssl command (available in the openssl package on CentOS/Red Hat and Debian/Ubuntu) to generate my own Certificate Authority certificate and sign the SSL certificates.

First, we’ll create the /etc/one/certs directory on the Frontend and set the right owner:

mkdir -p /etc/one/certs
chown -R oneadmin:oneadmin /etc/one/certs

We’ll generate an RSA key with 2048 bits for the CA:

openssl genrsa -out /etc/one/certs/oneCA.key 2048

Now we’ll produce the CA certificate using the key we’ve just created, answering some questions to identify our CA (e.g. my CA will be named ArtemIT Labs CA). Note that this CA certificate will be valid for 3650 days, i.e. 10 years!

openssl req -x509 -new -nodes -key /etc/one/certs/oneCA.key -days 3650 -out /etc/one/certs/oneCA.pem

You are about to be asked to enter information that will be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [XX]:ES
State or Province Name (full name) []:Valladolid
Locality Name (eg, city) [Default City]:Valladolid
Organization Name (eg, company) [Default Company Ltd]:ArtemIT Labs
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:ArtemIT Labs CA
Email Address []:

Now, we already have a CA certificate and a key to sign SSL certificates. Time to generate the SSL certificate for WSS connections.

First, we’ll create the key for the Frontend, then we’ll generate the certificate signing request (CSR), answering some questions. In this example my Frontend server is called tornasol.artemit.local and I’ve set no challenge password for the certificate.

openssl genrsa -out /etc/one/certs/one-tornasol.key 2048

openssl req -new -key /etc/one/certs/one-tornasol.key -days 3650 -out /etc/one/certs/one-tornasol.csr

You are about to be asked to enter information that will be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [XX]:ES
State or Province Name (full name) []:Valladolid
Locality Name (eg, city) [Default City]:Valladolid
Organization Name (eg, company) [Default Company Ltd]:ArtemIT Labs
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:tornasol.artemit.local
Email Address []:
Please enter the following 'extra' attributes to be sent with your certificate request
A challenge password []:
An optional company name []:

If everything went fine, you’ll have the certs and keys under /etc/one/certs.

Now we’ll copy the oneCA.pem file to the computers where we’ll use a browser to open the Sunstone GUI.

In Firefox, we’ll import oneCA.pem (the CA certificate file) via the Preferences -> Advanced -> Certificates -> Authorities tab, checking all the options as shown in this image. In Chrome under Linux, the process for importing your CA cert is the same.


If using IE or Chrome under Windows, change the extension from .pem to .crt, double-click the certificate, and add it to the Trusted Root Certification Authorities store. Some warnings will show; just accept them.

Once our CA certificate is trusted, you can open your encrypted NoVNC windows.


Free, quick, and secure for your lab environment, but remember: don’t do this in a production environment!


Barcelona OpenNebula User Group


As you know, the community is an important pillar of the OpenNebula project. Through the mailing lists and forums, the OpenNebula community can ask questions, make requests, and contribute new ideas to the developers. This information is very useful, whether it helps other users or drives the development of new features.

The OpenNebula project also fosters User Groups: local communities where users can discuss and share information and experiences in a more direct way, closer to home, bringing together people who want to collaborate with the project.

Also, remember that this year (2015) the annual OpenNebula Conference travels from Berlin to Barcelona, the ‘smart city’ that will be the meeting point where developers, users, administrators, researchers, and others can share experiences, case studies, and more.


For these reasons, some cloud admins in the Barcelona area have decided to create the Barcelona OpenNebula User Group. This group aims to be a small-scale community where we can discuss and find common objectives that support the project. We have created a website and a Google group where we will announce first steps and work together on common goals.

In addition, as part of the ONEBCN user group’s official presentation tour, on the 5th of May we will be at sudoers, a sysadmin group that meets regularly at the North Campus of the UPC.

It is a totally open group, so you are welcome! First members of the group:

Oriol Martí, Gabriel Verdejo, Angel Galindo Muñoz, Xavier Peralta Ramos, Jordi Guijarro, Juan José Fuentes, Miguel Ángel Flores, Alex Vaqué

Some interesting links:

Cloudadmins Community Blog –

OneBCN Google Group –

Sudoers Barcelona –

Videos from OpenNebulaConf 2014

Last week we celebrated the OpenNebulaConf 2014, an event where the community comes together to share their experiences and new ideas around OpenNebula. If you were there, go ahead and take a look at the photos in the conference page to check if we caught a flattering pic of you.

The OpenNebulaConf 2014 was a great event, and certainly our speakers deserve most of the credit for it. Thank you for sharing your expertise!

If you missed the conference, you now have a chance to watch the talks on our YouTube channel and download the slides from our SlideShare account. Enjoy.


Technical Notes from OpenNebulaConf 2014

One of the best things about getting together for the conference is that our community always comes with plenty of new ideas and useful feedback to shape the project’s roadmap.

This year’s OpenNebulaConf was full of interesting talks with lots of thoughtful feedback, but we also had many productive discussions in the hacking session, the coffee breaks, and the evening get-togethers.

In this post we will try to summarize the main requests we gathered during the OpenNebulaConf. Feel free to join the discussion in the development portal or in the mailing lists.

And remember, you are always welcome to add new tickets, don’t be shy! We appreciate it when you open new requests, it’s always better to develop with real needs and use cases in mind.

Finally, I would like to take this opportunity to thank all of you for showing up in Berlin and making the conference awesome. See you next year!


Resource Management

New Integrations

Quotas & Accounting


Authentication & Authorization