Upcoming Cloud TechDays in Cambridge (USA) and Toronto (Canada)

Last week we opened the call for speakers and registration to the OpenNebula Cloud TechDays in Kuala Lumpur (Malaysia), Sofia (Bulgaria), Dallas (TX, USA), Ede (The Netherlands) and Nuremberg (Germany). Today we are announcing that we have just opened the call for speakers and registration to the following Cloud TechDays:

  • Cambridge, MA, USA
  • Toronto, Canada

Send us an email at events@opennebula.org if you are interested in speaking at one of the TechDays, and register as soon as possible if you are interested in participating. Seats are limited!

For more information on past events, please visit the Cloud Technology Days page.

Please send us an email at events@opennebula.org if you are interested in hosting a TechDays event.

We look forward to your answers!

Upcoming Cloud TechDays in 2016

Besides our annual OpenNebula Conference in Barcelona (very early bird registration is now open), we are planning to organize Technology Day events in multiple cities globally during 2016. We have just published complete information about the following TechDays:

  • Kuala Lumpur, Malaysia
  • Sofia, Bulgaria
  • Dallas, TX, USA
  • Ede, The Netherlands
  • Nuremberg, Germany

And in a few days we will publish the details of the TechDays we are organizing in:

  • Dublin, Ireland
  • Toronto, Canada
  • Cambridge, MA, USA
  • Madrid, Spain
  • San Francisco, CA, USA

TechDays


The OpenNebula TechDays are full-day events to learn about OpenNebula, with a hands-on cloud installation and operation workshop and presentations from community members and users that focus on:

  • Sharing cloud use cases and deployment experiences
  • Introducing new integrations and ecosystem developments
  • Describing other related cloud open-source projects and tools

Send us an email at events@opennebula.org if you are interested in speaking at one of the TechDays, and register as soon as possible if you are interested in participating. Seats are limited!

For more information on past events, please visit the Cloud Technology Days page.

Please send us an email at events@opennebula.org if you are interested in hosting a TechDays event.

We look forward to your answers!

OpenNebula Public Training for 2016

OpenNebula Systems has published the schedule for public classes in 2016 at its offices in Madrid, Spain, and Cambridge, MA, USA. This year OpenNebula Systems is expanding its public training services to the United States.

The OpenNebula Fundamentals: Cloud Operator and Architect course uses a hands-on lab environment to provide IT professionals with the skill sets they need to install, configure and operate OpenNebula deployments. Additionally, the program briefly addresses the integration of OpenNebula with other components in the data center.

You can contact OpenNebula Systems if you would like to request public training near you or private on-site training.

Automated Oversubscription and Dynamic Memory Elasticity for OpenNebula

In Cloud Management Platforms, users typically deploy their VMs from pre-defined templates that specify a fixed amount of memory. Users may be able to customize the size of their VMs, but they tend to overestimate the memory required for their applications. As an example, three 4 GB VMs whose applications actually use 1 GB of RAM fit in a 12 GB node, but there is no room left for additional VMs. Adjusting their memory down to 1 GB makes it possible to deploy additional VMs on that node, as shown in the next figure.

[Figure: adjusting VM memory frees room for additional VMs on the node]

CloudVAMP is an open-source development that manages these situations for all the nodes in an OpenNebula Cloud deployment based on the KVM hypervisor. CloudVAMP monitors the memory usage of the Virtual Machines (VMs) and dynamically changes the memory allocated to each VM by reclaiming its unused free memory. CloudVAMP then lets OpenNebula use that reclaimed memory, increasing the VM-per-node ratio. To prevent memory overload on the physical hosts, live migration is applied in order to accommodate the increasing memory demand by VMs across the OpenNebula Cloud.

Some Technical Details

CloudVAMP consists of three components:

  • Cloud Vertical Elasticity Manager (CVEM). An agent that analyzes the amount of memory actually needed by the VMs and dynamically updates the memory allocated to each of them, according to a set of customizable rules.
  • The Memory Reporter (MR). An agent that runs in the VMs and reports the free and used memory and the swap-space usage of the applications in the VM to the OpenNebula monitoring system.
  • The Memory Oversubscription Granter (MOG). A system that informs OpenNebula about the amount of memory that can be oversubscribed from the hosts, to be taken into account by the OpenNebula scheduler.

CloudVAMP integrates with OpenNebula in several ways. The MR can be staged in the VMs using the contextualization mechanisms provided by OpenNebula, or it can be pre-installed in the Virtual Machine Images. It contacts OneGate to report the memory usage in the VMs. The CVEM is installed as a daemon on the front-end node of the OpenNebula Cloud. Finally, the MOG is implemented as a new Information Manager in OpenNebula. The interaction with KVM is performed by means of libvirt; therefore, no modifications to the OpenNebula worker nodes are required. The interaction between the components is shown in the following figure:

[Figure: CloudVAMP architecture]
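To make the MR's reporting path concrete, below is a minimal sketch of how an in-VM agent could push memory metrics through the standard OneGate REST interface. The attribute name (FREE_MEMORY) and the exact context variable names are illustrative assumptions, not necessarily the ones CloudVAMP itself uses:

    #!/bin/sh
    # Minimal MR-style reporter sketch (attribute name is an assumption).
    # ONEGATE_ENDPOINT, ONEGATE_TOKEN and VMID are assumed to be provided
    # by the OpenNebula contextualization (e.g. sourced from the context disk).
    . /mnt/context.sh

    # Read the free memory (in kB) from /proc/meminfo
    FREE_KB=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)

    # Push the metric to OneGate so it appears in the VM monitoring data
    curl -X "PUT" "$ONEGATE_ENDPOINT/vm" \
         --header "X-ONEGATE-TOKEN: $ONEGATE_TOKEN" \
         --header "X-ONEGATE-VMID: $VMID" \
         -d "FREE_MEMORY = $FREE_KB"

Run periodically (e.g. from cron), such an agent gives the CVEM a fresh view of the free memory inside each VM.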

Benefits of Using CloudVAMP

Deploying CloudVAMP in an OpenNebula Cloud allows OpenNebula to seamlessly deploy more VMs per physical host, thus achieving increased server density. The memory usage of VMs is monitored in order to satisfy increased memory demands from the applications running in the VMs. Live migration is used, if necessary, to redistribute the VMs without downtime and without any user or sysadmin intervention. This enables increased usage of the hardware platform that supports an OpenNebula Cloud.

In particular, at the GRyCAP research group we have integrated CloudVAMP in order to accommodate a larger number of incoming jobs from the ES-NGI (the Spanish National Grid Initiative) that are executed on a virtual elastic cluster deployed and managed by EC3 (Elastic Cloud Computing Cluster). The virtual cluster, deployed on top of our OpenNebula Cloud, is horizontally scaled whenever incoming jobs are received (i.e., additional Worker Nodes (WNs) are deployed) and vertically scaled (i.e., the memory allocated to the VMs is adjusted) in order to let OpenNebula deploy additional WNs on the same host, if necessary. Further details of this case study are available in CloudVAMP’s reference publication.

Availability

CloudVAMP has been developed by the GRyCAP research group at the Universitat Politècnica de València. It is available under the Apache 2.0 license on GitHub.

Further information is available on CloudVAMP’s web page and in the corresponding publication:

Germán Moltó, Miguel Caballer, and Carlos de Alfonso. 2016. “Automatic Memory-Based Vertical Elasticity and Oversubscription on Cloud Platforms.” Future Generation Computer Systems 56: 1–10. http://linkinghub.elsevier.com/retrieve/pii/S0167739X15003155.

Contributions, feedback and issues are very much welcome.

OpenNebula TechDays 2016 – Call for Hosts

Besides our annual OpenNebula Conference, we are planning to organize Technology Day events in multiple cities globally during 2016.

The OpenNebula TechDays are full-day events to learn about OpenNebula, with a hands-on cloud installation and operation workshop and presentations from community members and users that focus on:

  • Sharing cloud use cases and deployment experiences
  • Introducing new integrations and ecosystem developments
  • Describing other related cloud open-source projects and tools

In the shorter term we would like to organize TechDays in the USA (East and West coasts) and Europe, as we did during 2015.

For more information on past events, please visit the Cloud Technology Days page. These are not-for-profit events; all funds raised are rolled into the OpenNebula promo fund and will be used for future OpenNebula TechDay events.

We look forward to your answers!

OpenNebula Docker Driver and Datastore

ONEDock is a set of extensions for OpenNebula to use Docker containers as first-class entities, just as if they were lightweight Virtual Machines (VMs). For that, Docker is configured to act as a hypervisor, so that it behaves just as KVM or other hypervisors do in the context of OpenNebula.

The underlying idea is that when OpenNebula is asked for a VM, a Docker container is deployed instead. In the context of OpenNebula, it is managed as if it were a VM, and the user can use IP addresses to access the container.

Docker Machine and similar projects deploy VMs in different Cloud Management Platforms (e.g. OpenNebula, OpenStack) or commercial providers (like Amazon EC2) and install Docker on them. Afterwards, it is possible to deploy and manage Docker containers inside them, using the Docker client tools that communicate directly with the Docker services deployed inside the aforementioned VMs.

Instead, ONEDock takes a different approach by deploying Docker containers on top of bare-metal nodes, thus considering the containers as first-class citizens in OpenNebula. This makes it possible to seamlessly integrate the benefits of Docker containers (quick deployment, limited overhead, availability of Docker images, etc.) into a Cloud Management Platform such as OpenNebula. On the other hand, it provides containers with features that are usually reserved for VMs (e.g. enhanced IP addressing, attachment of block devices, etc.).

ONEDock tries to adapt Docker semantics to the OpenNebula context. The workflow for a whole use-case is the following (a command-level sketch appears after the list):

  1. An image is registered in a datastore of type ‘onedock’, by using the oneimage command.
  2. ONEDock will download the image from Docker Hub.
  3. A VM that uses an image registered in the ‘onedock’ datastore is requested.
  4. When the VM is scheduled, ONEDock will actually create a Docker container instead of the VM, and the container will be daemonized (i.e. kept alive).
  5. If the container is connected to a network, it can be accessed (e.g. using SSH or HTTP).
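As an illustration, the workflow above could translate to commands like the following. The datastore ID, the image-naming convention for the Docker Hub reference, and the template contents are assumptions for the sake of the sketch; the ONEDock documentation is the authoritative source:

    # container.tpl -- a minimal template using the registered image
    NAME   = "web"
    CPU    = 1
    MEMORY = 512
    DISK   = [ IMAGE = "ubuntu" ]
    NIC    = [ NETWORK = "private" ]

    # Steps 1-2: register a Docker Hub image in the 'onedock' datastore
    # (datastore 100 is assumed to be of type 'onedock'; ONEDock then
    # pulls the referenced image from Docker Hub)
    oneimage create --name ubuntu --path ubuntu:latest --datastore 100

    # Steps 3-4: create and instantiate the template; when scheduled,
    # ONEDock starts a daemonized container instead of a VM
    onetemplate create container.tpl
    onetemplate instantiate web

    # Step 5: once running and attached to a network, reach it by IP
    ssh root@<container-ip>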

The most prominent feature of ONEDock is that it does not introduce any API changes and therefore does not modify the way of interacting with OpenNebula: it is possible to use the ONE CLI (i.e. oneimage, onevm, onetemplate), OpenNebula Sunstone, XML-RPC, etc. and keep the usual lifecycle for the VMs.

Technical details

ONEDock provides four components that need to be integrated into the OpenNebula deployment:

  • ONEDock Datastore, which makes it possible to create datastores that contain Docker images. It is self-managed in the sense that images are created as references that are automatically downloaded from Docker Hub.
  • ONEDock Transfer Manager, which stages the Docker images in a Docker datastore onto the virtualization hosts.
  • ONEDock Monitoring Driver, which monitors the virtualization hosts in the context of the Docker hypervisor.
  • ONEDock Virtual Machine Manager, which carries out the tasks related to the lifecycle of the Docker containers, as if they were VMs.

These components have to be installed in the proper folders of a ONE frontend (i.e. /var/lib/remotes/) and activated in the oned.conf file. Therefore, no source code modifications of OpenNebula are required.

Once this has been done, it is possible to create datastores of type ‘onedock’ and virtualization hosts that use ‘onedock’ as the virtual machine manager.
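For illustration, registering such a datastore and host could look like the sketch below. The attribute values follow the usual OpenNebula driver-naming pattern and are assumptions; check the ONEDock documentation for the exact templates:

    # onedock-ds.conf -- datastore driven by the ONEDock drivers
    NAME   = "docker_ds"
    DS_MAD = "onedock"
    TM_MAD = "onedock"

    # Register the datastore and a virtualization host
    # (OpenNebula 4.x also expects a network driver for the host)
    onedatastore create onedock-ds.conf
    onehost create docker-node01 --im onedock --vm onedock --net dummy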

The ONEDock datastore

In order to deploy a Docker container, a Docker image is required. When you run a container, Docker automatically retrieves the image from the Docker Hub repository.

To avoid having every virtualization host access Docker Hub directly, and to act as a kind of cache, ONEDock supports a private registry installed on the OpenNebula front-end. The references to the Docker images will then point to the private Docker registry. ONEDock supports Docker Registry v2.0.
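Setting up such a registry on the front-end can be as simple as running the official registry image; the port and container name below are the registry defaults, and how ONEDock is pointed at the registry is configuration covered by the ONEDock documentation:

    # Run a Docker Registry v2.0 on the OpenNebula front-end
    docker run -d -p 5000:5000 --restart=always --name registry registry:2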

The network in ONEDock

Docker containers are conceived to run applications, so it is common for ports to be redirected to public ports on the machine that hosts the Docker container. ONEDock enhances this behaviour by exposing all the ports of the container, as would happen in a VM. Therefore, you can run different services on different ports without the need to expose them explicitly. The container will have an IP address where all the ports are available.

Testing ONEDock

ONEDock can be evaluated in a sandbox before deploying it in your on-premises Cloud. Easy as 1, 2, 3 (a sketch of the Vagrant route appears after the lists):

  1. Install Vagrant
  2. Spin up the Vagrant VM, which will be automatically configured with ONE, ONEDock, the Docker registry and all the needed components.
  3. Start creating Docker containers with the common ONE commands (i.e. onevm create, etc.)

Or

  1. Install LXC
  2. Create a testing container (using a self-contained CLI utility that installs ONE, ONEDock, the Docker registry and all the needed components).
  3. Start using ONE by issuing the common ONE commands (i.e. onevm create, etc.)
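For the Vagrant route, the steps could look like this, assuming the ONEDock repository ships the Vagrantfile that provisions the sandbox:

    # Clone the repository and bring up the pre-configured sandbox
    git clone https://github.com/indigo-dc/onedock
    cd onedock
    vagrant up

    # Log into the sandbox and create container-backed VMs with the
    # usual ONE commands (see the workflow sketch above)
    vagrant ssh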

Getting ONEDock

ONEDock has been developed in the framework of the INDIGO-DataCloud (https://www.indigo-datacloud.eu) project under the Apache 2.0 license. You can get it from the public repository https://github.com/indigo-dc/onedock.

ONEDock is accepting contributions. You are invited to interact with us in the GitHub repository by asking questions or opening new issues.

OpenNebula 4.14 ‘Great A’Tuin’ Beta2 is Out!

The OpenNebula project is proud to announce the release of OpenNebula 4.14 ‘Great A’Tuin’ Beta2. This Beta release introduces features not present in Beta1, for instance better support for Qcow2 and GPU support for VMs.

To support HPC-oriented infrastructures based on OpenNebula, 4.14 also enables the consumption of raw GPU devices on a physical host from a Virtual Machine. Neither overcommitment nor sharing of GPU devices among different Virtual Machines is possible, so a new type of consumable has been defined in OpenNebula and is taken into account by the scheduler. VMs can now request a GPU, and if OpenNebula finds a free resource of type GPU, it will set up the VM with PCI passthrough access to the GPU resource, giving applications the performance boost of direct access to a GPU card.

[Figure: GPU PCI device list]
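In practice, a VM requests a GPU by adding a PCI attribute to its template that matches the device by vendor, class and/or device ID, as reported by the host monitoring. A minimal sketch, where the hex values are example values for an NVIDIA VGA controller:

    # Fragment of a VM template requesting PCI passthrough of a GPU
    # (vendor 10de = NVIDIA, class 0300 = VGA controller; example values)
    PCI = [
      VENDOR = "10de",
      CLASS  = "0300"
    ]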

OpenNebula users managing vCenter infrastructures will also benefit from this upgrade. The workflow of the VM importing feature has been greatly improved in Sunstone, making it easier to import your existing workload into OpenNebula. Moreover, 4.14 adds the possibility to instruct OpenNebula whether or not it should save the disks, protecting your users against accidental data loss! Last, but not least, a contextualisation improvement now allows scripts to be passed directly to vCenter VMs for execution at boot time, increasing the flexibility of VM customisation from OpenNebula in vCenter.
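As a sketch, passing a boot-time script through contextualization could look like the fragment below; START_SCRIPT is the generic OpenNebula contextualization attribute for this, and whether the 4.14 vCenter driver uses exactly this attribute is an assumption to verify against the documentation:

    # Fragment of a VM template passing a boot-time script via contextualization
    CONTEXT = [
      START_SCRIPT = "yum install -y ntpdate"
    ]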

There are many other improvements in 4.14 (check the full list of changes in the development portal or, alternatively, the post on the 4.14 Beta1 release):

  • ceph and qcow2 disk snapshotting
  • image resizing on boot time
  • ability to save VMs into VM Templates for later use
  • better state management of VMs
  • flexible context definition of network attributes
  • ability to import running VMs not launched by OpenNebula from all the supported hypervisors (including the hybrid ones; for instance, it is now possible to manage through OpenNebula the Azure, SoftLayer and EC2 VMs launched through their respective management portals)
  • the possibility to cold-attach disks and network interfaces to powered-off machines (complementing the hot-attach functionality)
  • improvements in accounting to keep track of disk usage
  • better logging in several areas
  • the ability to pass scripts to VMs for guest OS customization

This OpenNebula release is named after Great A’Tuin, the Giant Star Turtle (of the fictional species Chelys galactica) who travels through the Discworld universe’s space, carrying four giant elephants who in turn carry the Discworld. Allegedly, it is “the only turtle ever to feature on the Hertzsprung–Russell diagram.”

The OpenNebula team is now set to bug-fixing mode. Note that this is a beta release aimed at testers and developers who want to try these new features (not at production environments) and send their more than welcome feedback for the final release.

Several organizations have sponsored the project through the Fund a Feature Program.

The OpenNebula project would like to thank these organizations and the community members and users who have contributed to this software release by being active with the discussions, answering user questions, or providing patches for bugfixes, features and documentation.

Relevant Links

OpenNebula at VMworld 2015 in Barcelona


On October 12-15, just one week before OpenNebulaConf 2015, a major event in the virtualization world will take place in Barcelona, Spain: VMworld 2015 Europe, a must-attend event where almost everyone with an interest in cloud computing and virtualization will be networking with industry experts.

The OpenNebula team will be present at VMworld with a booth dedicated to demonstrations of OpenNebula and the vCenter drivers, as well as vOneCloud, the open-source replacement for VMware vCloud.

If you are going to be around Barcelona next month, make sure you attend the event and come round to our booth, NI4040 (near the entrance, to the left of the Intel stand). There you will be able to see first-hand how a VMware-based infrastructure can be managed using your favourite, purely open-source technology: OpenNebula.

OpenNebulaConf Early Bird Registration Deadline Approaching

After the summer vacation, it’s a great time to start thinking about the next technical events you want to attend. OpenNebulaConf 2015 is a great opportunity to share experiences and meet people with expertise and interest in OpenNebula. The agenda for OpenNebulaConf 2015 is available; check out the high-quality speakers. The agenda includes two keynote speakers.

As you may already know, OpenNebulaConf 2015 will take place in the cosmopolitan city of Barcelona from October 20th to 22nd, and you are still in time to benefit from the early bird discount.

All tickets include:

  • Attendance at all conference presentations (October 21st and 22nd)
  • Attendance at pre-conference tutorials and hacking sessions (October 20th)
  • Coffee during the morning and afternoon breaks
  • Lunch on both conference days
  • Dinner event on the first conference day
  • Tapas dinner on the pre-conference day
  • WiFi access

We would like to take this opportunity to also thank our Platinum Sponsors PTisp and StorPool; Gold Sponsors ungleich, Xen Server and NodeWeaver; and Silver Sponsors Runtastic and No Limit Network.


OpenNebulaConf Ticket (Early Bird – 20% discount)

OpenNebula 4.14 ‘Great A’Tuin’ Beta 1 released!

The OpenNebula project is proud to announce the availability of OpenNebula 4.14 ‘Great A’Tuin’ Beta1. This release ships with several improvements in different subsystems and components. The Sunstone interface has been completely refactored, for maintenance and performance reasons. Expect major improvements in Sunstone from now on. Also, we are sure you will like the subtle changes in the look and feel.

[Figure: Sunstone dashboard in 4.14]


Several major features have been introduced in Great A’Tuin. One of the most interesting for cloud users and administrators is the ability to create and maintain a tree of VM disk snapshots. VM disks can now be reverted to a previous state at any given time, and the snapshots are preserved with the image in the image datastore if the image is persistent. For instance, you can attach a disk to a VM, create a snapshot, detach it, attach it to a new VM, and revert to a previous state. Very handy, for instance, to keep a working history of datablocks that contain dockerized applications.


[Figure: VM disk snapshot tree]
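From the command line, the snapshot tree is driven by the disk-snapshot subcommands; the command names below follow the 4.14 CLI, and the VM, disk and snapshot IDs are placeholders:

    # Take a snapshot of disk 0 of VM 42, then revert to it later
    onevm disk-snapshot-create 42 0 "before-upgrade"
    onevm disk-snapshot-revert 42 0 <snapshot_id>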

Also, in 4.14 snapshots are taken into account for quotas, accounting and showback, so cloud admins can keep track of disk usage in their infrastructure.

The ability to save VMs into VM Templates for later use is another feature that must be highlighted in this release. This new operation is accessible from the cloud view and the admin Sunstone view, as well as from the command-line interface.

One great improvement for cloud admins is much better state management of VMs. It is now possible to recover VMs from a failed state by instructing OpenNebula to take the last action as a success, to retry it, or to make it fail gracefully, recovering for instance from failed migrations.
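For example, a VM stuck after a failed action can be recovered from the CLI; the flag names below follow the reworked onevm recover subcommand in 4.14, and the VM ID is a placeholder:

    # Retry the failed action, or mark it as succeeded or failed
    onevm recover 42 --retry
    onevm recover 42 --success
    onevm recover 42 --failure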

There are many other improvements in 4.14 (check the full list of changes in the development portal):

  • flexible context definition of network attributes
  • ability to import running VMs not launched by OpenNebula from all the supported hypervisors (including the hybrid ones; for instance, it is now possible to manage through OpenNebula the Azure, SoftLayer and EC2 VMs launched through their respective management portals)
  • the possibility to cold-attach disks and network interfaces to powered-off machines (complementing the hot-attach functionality)
  • improvements in accounting to keep track of disk usage
  • better logging in several areas
  • the ability to pass scripts to VMs for guest OS customization

Overall, a great effort was put in this release to help build and maintain robust private, hybrid and public clouds with OpenNebula.

This OpenNebula release is named after Great A’Tuin, the Giant Star Turtle (of the fictional species Chelys galactica) who travels through the Discworld universe’s space, carrying four giant elephants who in turn carry the Discworld. Allegedly, it is “the only turtle ever to feature on the Hertzsprung–Russell diagram.”

The OpenNebula team is now set to bug-fixing mode. Note that this is a beta release aimed at testers and developers who want to try these new features (not at production environments) and send their more than welcome feedback for the final release. A number of very interesting features that are not present in Beta1 will make their appearance in the final release, for instance better Qcow2 support for live snapshotting and GPU support for VMs.

Disk snapshots with the Ceph backend were funded by Unity, Qcow2 snapshot support by BIT.nl, GPU device support by SURFsara, and flexible network attribute definition in contextualization by Université Catholique de Louvain, all in the context of the Fund a Feature Program.

Relevant Links