LXDoNe – Lightweight Virtualization for OpenNebula

Operating system (OS) level virtualization is a technology that has recently emerged into the cloud services paradigm. It provides better performance, elasticity and scalability than para-virtualization or full virtualization, because HVM hypervisors have to emulate hardware and boot a separate kernel for each virtual machine they deploy. OS-level virtualization follows a completely different approach: the kernel is shared between the host and the “virtual machines” running on top of it. A container is a virtual environment with its own process and network space that uses Linux kernel Control Groups and Namespaces to provide isolation. Containers have their own view of the OS: process ID space, file system structure and network interfaces. Since they rely on kernel features and there is no hardware emulation at all, the impact on performance is minimal.


LXD is not just another operating-system-level virtualization technology; it is a hypervisor for containers. This means LXD containers look and act like virtual machines, but have the lightweight performance and scalability of process containers [1]. LXD has shown dramatic improvements over HVM hypervisors such as KVM [2] and ESX [3] in areas such as density, speed and latency.
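
To make this concrete, a minimal LXD session with the stock lxc client looks like the following (the image alias and container name are arbitrary examples):

$ lxc launch ubuntu:16.04 web01   # create and start a container from an image
$ lxc list                        # list containers with their state and IP addresses
$ lxc exec web01 -- /bin/bash     # open a shell inside the container, much like SSH into a VM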


LXDoNe is an add-on that allows OpenNebula to manage LXD containers. It contains virtualization and monitoring drivers. Right now it is deployed in the data center of Universidad Tecnológica de La Habana José Antonio Echeverría, alongside KVM for special cases that require a different kernel. We are actively working on adding more features, and any reported bug will be prioritized by our team.

Features and Limitations

Right now, the driver has the following features (a short deployment sketch follows the list):

  • Life cycle control: deploy, shutdown, restart, reset, suspend and resume containers.
  • Support for Direct Attached Storage (DAS) filesystems such as ext4 and btrfs.
  • Support for Storage Area Networks (SAN) implemented with Ceph.
  • Monitor hosts and containers.
  • Limit a container’s resource usage: RAM and CPU.
  • Support for VNC sessions.
  • Deploy containers with several disks and Network Interface Cards (NICs).
  • Support for dummy and VLAN network drivers.
  • Full support for OpenNebula’s contextualization in LXD containers (using special LXD images that will be uploaded to the Marketplace).
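
As an illustration of how these features surface to the user, here is a minimal sketch of deploying an LXD container through the standard OpenNebula CLI. The image and network names are hypothetical placeholders; the exact set of attributes supported by the driver is described in its installation guide:

$ cat > container.tmpl <<'EOF'
NAME     = "lxd-container"
CPU      = 1                             # CPU usage limit
MEMORY   = 512                           # RAM limit in MB
DISK     = [ IMAGE = "ubuntu-lxd" ]      # hypothetical LXD image registered in OpenNebula
NIC      = [ NETWORK = "private" ]       # hypothetical virtual network
GRAPHICS = [ TYPE = "vnc", LISTEN = "0.0.0.0" ]
EOF
$ onetemplate create container.tmpl
$ onetemplate instantiate lxd-container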

Features we are currently working on, most of which should be ready in the next couple of weeks:

  • Migration.
  • Snapshots.
  • Hot attach and detach of NICs and disks.
  • LVM support.
  • Bandwidth limitation.

Known bugs:

  • VNC sessions only work with the first container on each node; the reason for this behavior is explained here.

The driver has been released here, where you can also find the installation guide to start using LXD with OpenNebula.

Contributions, feedback and issues are very much welcome; interact with us in the GitHub repository or write an email to:

  • Daniel Clavijo Coca: dann1telecom@gmail.com
  • José Manuel de la Fé Herrero: jmdelafe92@gmail.com
  • Sergio Vega Gutiérrez: sergiojvg92@gmail.com

Announcing the OpenNebula Champion Program

We would like to officially announce the OpenNebula Champion program. If you attended our last Conference, you’re probably aware that for the past couple of months, we’ve been designing a program for our community members who are interested in becoming Champions for OpenNebula around the world.

Champions are passionate volunteers who work to connect, teach and spread OpenNebula throughout the world. The project web site has complete information on what the program entails, how the project plans to support its Champions, and how to become an OpenNebula Champion.

This is the list of the first ever Champions of the OpenNebula project:


Over the last few years we have traveled all over the world delivering talks and hands-on workshops and organizing TechDays and tutorials, where we have met a lot of amazing, enthusiastic, resourceful and engaged people. To continue this journey, in a few days we will announce the schedule of OpenNebula Conferences, TechDays and training for 2017. These events provide a great opportunity to raise awareness of the project and get more of you involved as contributors and users. As we scale the project to the next level, we need your help in spreading the message.

We look forward to your participation, and we would like to heartily thank our Champions and TechDay hosts!

New Maintenance Release 5.2.1

On behalf of the team, we are happy to announce the maintenance release 5.2.1. This version fixes some problems found since 5.2.0 and adds some small functionality.

The following new functionality has been introduced in 5.2.1:

The following issues have been solved in 5.2.1:

Relevant Links

OpenNebula 5.4 – Enhancements in vCenter Integration

The development of OpenNebula 5.4 has already started, and the biggest focus for this release is improving the vCenter integration. In particular, two areas will be addressed, networking and storage, in order to bring the vCenter integration up to speed with the open source hypervisor, KVM. After this release there will be virtually no difference between managing VMs on VMware and on KVM.

Networking

The goal of the vCenter network integration is to be able to create networks (i.e. Port Groups and Distributed Port Groups) from within OpenNebula, not just consume them. We will use the same mechanism as in KVM, i.e. configuring the underlying network dynamically as required by the Virtual Machines, including the management of VLAN IDs.
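
As a sketch of how this could look from the CLI, the template below follows the conventions of OpenNebula's existing network drivers; the attribute names are indicative of the planned interface, and all names and IDs are placeholders:

$ cat > tenant-net.tmpl <<'EOF'
NAME    = "tenant-a-net"
VN_MAD  = "vcenter"       # vCenter network driver
BRIDGE  = "tenant-a-pg"   # Port Group that OpenNebula would create in vCenter
VLAN_ID = 101             # VLAN ID managed by OpenNebula
AR      = [ TYPE = "IP4", IP = "10.0.101.2", SIZE = 250 ]
EOF
$ onevnet create tenant-net.tmpl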

 


There is a clear benefit to this new feature, as it implies a great improvement to the provisioning model. Typical use cases involve administrators creating groups of users or even VDCs in a multi-tenant environment where network isolation is a must. Currently, administrators are required to create either Port Groups or Distributed vSwitches beforehand in vCenter in order to provide tenants with isolation.

The second step, a big undertaking that will take more than one release to accomplish, is to offer this functionality directly in OneFlow, with the ability to create new networks automatically whenever a service is launched. This will allow tenants to spin up network-isolated multi-VM services without further management steps in vCenter or OpenNebula.


Storage

The aim here is to improve the vCenter storage drivers to the same level as their KVM counterparts, enabling:

  • System Datastores
  • Non-persistent images

This again is a big undertaking that will take more than one release to accomplish.

This functionality will enable important features for the vCenter integration, such as the ability for the OpenNebula scheduler to intelligently choose the target system datastore (replacing Storage DRS!), support for storage quotas, disk resizing both at boot time and at runtime, and much more!
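
For reference, these are the kinds of operations, already available on KVM, that the improved drivers would bring to vCenter (the IDs below are placeholders and the disk size is given in MB):

$ onevm disk-resize 42 0 20480   # grow disk 0 of VM 42 to 20 GB
$ oneuser quota 5                # edit the storage (and other) quotas of user 5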

***

The network and storage import process will also be greatly improved: whenever a VM Template is imported, all the NICs and DISKs of the VM Template will be imported, and the associated networks, datastores and images will be created.

Additionally, improvements in driver efficiency will ensure the scalability of OpenNebula on very large infrastructures. The main focus will be time and memory efficiency, mainly in monitoring operations.

The OpenNebula roadmap is strictly based on your input. Thanks for your amazing feedback!

9 Years of OpenNebula!


Today is the 9th anniversary of the founding of the OpenNebula open source project. We still remember the early days of OpenNebula, when we decided to start an open-source project to grow, distribute and support the software prototypes we had developed for our research in virtualization management on distributed infrastructures. Yet here we are, nine years later, with one of the most successful projects in the open cloud ecosystem.

OpenNebula has grown into a widely-used user-driven project, with lots of examples of production use in academia, research and industry, over 100K downloads from the project repositories in the last year, more than 1,500 clouds connected to our marketplace, and large-scale infrastructures with tens of data centers and hundreds of thousands of cores. We are very proud of this growth as the only fully-open enterprise-ready management platform to build clouds.

None of this would have been possible without your help and the hard work of thousands of users and contributors!

On behalf of the OpenNebula Team

How We Use OpenNebula at Fuze

Fuze, headquartered in Cambridge, MA with additional locations in North America, Europe, Asia and Australia, is a global, cloud-based unified communications platform that empowers productivity and delivers insights across the enterprise through a single unified voice, video, and messaging application. Fuze allows the modern, mobile workforce to seamlessly communicate anytime, anywhere, across any device. With Fuze, customers have a single global carrier network leveraging a resilient private cloud infrastructure and QoS-engineered network to deliver the best enterprise-class IP PBX voice service available.

Given the company’s record momentum and growth, scaling the private cloud infrastructure to meet this global expansion is key to customer satisfaction. In mid-2015, we deployed OpenNebula as part of the Fuze Private Cloud Management Stack to achieve a continuously reliable, growing private cloud spanning Fuze’s global infrastructure.

Here’s a quick look at the key features of OpenNebula that powered Fuze Private Cloud:

  • Simple, lightweight deployment that is easily extendable, being open source.
  • Self-service capability through the GUI (Sunstone), CLI and API.
  • Built-in multi-tenancy providing segregation among internal workloads.
  • Support for agile DevOps by abstracting the underlying infrastructure and supporting mainstream hypervisors as well as public cloud bursting.
  • Automated VM orchestration via contextualization, which personalizes instances at instantiation time.
  • Service orchestration provided by OneFlow and OneGate, supporting multi-tier applications using role-based instances. In addition, OneFlow can apply auto-scaling and recovery rules based on either generic or custom application attributes pushed via OneGate (a sketch follows this list).
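
As an example of that last point, below is a minimal sketch of a OneFlow service template with an elasticity rule driven by a custom attribute. CUSTOM_LOAD is a hypothetical metric pushed by the application, and the VM template ID is a placeholder:

$ cat > web-service.json <<'EOF'
{
  "name": "web-tier",
  "roles": [
    {
      "name": "frontend",
      "cardinality": 2,
      "vm_template": 7,
      "min_vms": 2,
      "max_vms": 10,
      "elasticity_policies": [
        {
          "type": "CHANGE",
          "adjust": 2,
          "expression": "CUSTOM_LOAD > 80",
          "period_number": 3,
          "period": 30
        }
      ]
    }
  ]
}
EOF
$ oneflow-template create web-service.json

From inside a VM, the application pushes the metric through OneGate:

$ onegate vm update --data "CUSTOM_LOAD=85"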

Currently, the Fuze Private Cloud has a central OpenNebula deployment in a data center in the eastern US. This fault-tolerant OpenNebula deployment is configured to connect to and manage Fuze data centers across US East/West, Australia, Asia, and Europe.


VMware ESXi/vCenter 6.0 is the dominant virtualization technology used across our data centers, backed by SAN-based storage, and we also use Amazon Web Services. OpenNebula leverages the SOAP API offered by each data center’s vCenter to manage the entire global cloud infrastructure and present it to the different engineering teams as a single pane of glass for orchestration. In addition, we connected OpenNebula to a few AWS Virtual Private Clouds (VPCs) with dedicated tenancy, into which it bursts where applicable.

We are currently running OpenNebula 5.0, which brought an enhanced user experience through Sunstone, renamed lifecycle states and a redesigned Marketplace. We will upgrade to OpenNebula 5.2 soon to make use of the upgraded hybrid cloud drivers.

Overall, the OpenNebula project has taken the Fuze Global Private Cloud to the next level and continues to be a fundamental factor in continuous innovation and customer satisfaction.

New OpenNebula VCLOUD driver: Building Hybrid Clouds with VMware cloud providers

By definition, “hybrid cloud computing” is a model that combines the use of multiple cloud services across different deployment models, for example combining public cloud services with a private cloud inside or outside the organization.

Most companies and organizations were not born in the “cloud”, which often means that cloud resources have to be connected to traditional systems or applications with some degree of criticality, usually located on their own premises. This type of architecture is the most common, and the keys to its success include taking into account aspects such as integration capabilities, hyper-converged management, etc.

Cloud bursting is always welcome!

Today we are sharing exciting news about the expansion of the number of public clouds supported by OpenNebula to build hybrid cloud deployments. As a result of the collaboration between OpenNebula and CSUC, a new add-on to support vCloud providers has been added to the OpenNebula catalogue.

“With this add-on, real hybrid architectures can use OpenNebula’s rich set of infrastructure management tools to manage cloud deployments across vCloud private, public and hosted cloud platforms.”

 

The driver is developed for OpenNebula 5.x and vCloud 5.5, and is released today for testing. The integration has been carried out using the ruby_vcloud_sdk, which interacts with the vCloud Director API, enabling complete control of the lifecycle of Virtual Machines in a transparent way within an OpenNebula cloud. Thanks to this new add-on, private resources can easily be supplemented with resources from external providers to meet fluctuating demand.

https://github.com/OpenNebula/addon-vcloud-driver

Description

This add-on gives OpenNebula the ability to manage resources in VMware vCloud infrastructures. It includes virtualization and monitoring drivers.

This driver is based on the vCenter driver and uses a modified version of ruby_vcloud_sdk.


Features

This add-on has the following capabilities:

  • Deploy, stop, shutdown, reboot, save, suspend, resume and delete VMs in the Virtual Data Centers hosted in vCloud.
  • Create, delete and revert snapshots of VMs.
  • Change the RAM and CPU values of a VM.
  • Hot-attach and detach NICs to VMs.
  • Automated customization of instantiated VMs.
  • Obtain monitoring information from the VDC, datastores and VMs.
  • In this development version, vApps with one VM inside are managed (a VM in OpenNebula equals a vApp with one VM in vCloud).
  • Each Virtual Data Center (VDC) in vCloud is managed as a Host in OpenNebula.
  • Import networks, hosts, templates and datastores hosted in vCloud using the onevcloud script.

https://github.com/OpenNebula/addon-vcloud-driver

Need more information? You are welcome to use the OpenNebula community channels to ask around (for instance, the forum is a good place to pose your questions) or reserve a seat for the next Open Cloud Free Session in Barcelona (24/10, 14:00h): https://www.eventbrite.com/e/open-cloud-free-session-inside-opennebulaconf-tickets-27753771277

As always, we value your feedback and contributions to this new feature!

Barcelona User Group Team – www.cloudadmins.org

Managing Docker Hosts Deployments with Rancher and OpenNebula

Rancher is an open source software platform that enables organisations to deploy and manage container-based applications. Rancher supplies the entire software stack needed to manage containers in production using most of the commonly available container orchestration frameworks (Rancher Cattle, Docker Swarm, Kubernetes, Mesos).

Rancher has support for Docker Machine-based provisioning making it really easy to create Docker hosts on cloud providers. It creates servers, installs Docker on them, and configures the Docker client to talk to them. Using the Machine integration in Rancher, we can launch compute nodes directly from the Rancher UI.

Rancher has recently added support for docker-machine plugins, so it is possible to add Machine Drivers in order to create Docker hosts on any cloud provider.

This post will introduce Rancher and show how to launch OpenNebula Virtual Machines from the Rancher UI and provision them as Docker compute hosts, which can then be used to run Docker containers. In the next steps we are going to install Rancher and use the OpenNebula docker-machine plugin to add virtual machines as hosts to Rancher environments.

Step 1 – Rancher Installation

Let’s first create a VM with Docker on it, using docker-machine:

$ docker-machine create --driver opennebula rancher-server
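
The bare command above relies on the driver’s defaults; in a real setup the plugin needs credentials and placement information. A fuller invocation might look like the following sketch (flag names as documented in the docker-machine-opennebula README; credentials and IDs are placeholders, and `docker-machine create --driver opennebula --help` lists the authoritative set):

$ docker-machine create --driver opennebula \
      --opennebula-user oneadmin \
      --opennebula-password secret \
      --opennebula-xmlrpcurl http://one:2633/RPC2 \
      --opennebula-image-id 5 \
      --opennebula-network-id 3 \
      rancher-server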

Once the machine is created, we can install the Rancher server:

$ eval $(docker-machine env rancher-server)
$ docker run -d --restart=unless-stopped -p 8080:8080 rancher/server

After about a minute, your host should be ready and you can browse to http://rancher-server-ip:8080 to bring up the Rancher UI. If you deploy the Rancher server on a VM with access to the Internet, it is a good idea to set up access control (via GitHub, LDAP, etc.). For more information regarding the Rancher installation (single-node and HA setups, and authentication) you can refer to the official documentation.

Step 2 – Adding OpenNebula Machine Driver

In order to add OpenNebula Virtual Machines as hosts to Rancher, we need to add the docker-machine plugin binary in the Machine Drivers section of the Admin settings.


A Linux binary of the OpenNebula machine driver is available at https://github.com/OpenNebula/docker-machine-opennebula/releases/download/release-0.2.0/docker-machine-driver-opennebula.tgz.

 

Once you have added the machine driver, the OpenNebula driver should appear as active.


Step 3 – Adding OpenNebula Hosts

The first time you add a host, you will see a screen asking you to confirm the IP address your Rancher server is available on, i.e. the address the compute nodes will connect to.


Once you save the settings, you can proceed to create the first Rancher host.


Select the opennebula driver and fill in at least the following options:

  • Authentication: user, password
  • OpenNebula endpoint: xmlrpcurl (http://one:2633/RPC2)
  • ImageID
  • NetworkID

and then you can proceed to create the host. After a few minutes, when the creation process is complete, you should see a screen with the active host.


Step 4 – Deploy a container

To test the environment, you can select the host and add a container:



That’s all! We will be back soon with another post about the integration of Rancher and OneFlow to deploy multi-tier services on OpenNebula clouds. Stay tuned!

vOneCloud at VMworld 2016 EU in Barcelona


From the 17th to the 20th of October, VMworld 2016 EU will be held in Barcelona, Spain. This is a must-attend event where almost everyone with an interest in virtualization and cloud computing will be networking with industry experts.

The OpenNebula team will be present at VMworld with a booth dedicated to showcasing vOneCloud 2.0, the open source replacement for VMware vCloud. There will be a focus on new features like VMDK support, datastore and resource pool selection, and the Virtual Router functionality.

If you are planning to attend VMworld next month, make sure you register and do not forget to come by our booth, E652. You will be able to see in a live demo how a VMware based infrastructure can be turned into a cloud with a slick, fully functional self-service portal that delivers a VM catalog to your end users, in 5 minutes!

Open Cloud Free Session – OpenNebula Barcelona User Group


 

Date and Time: Mon, October 24, 2016 2:00 PM – 5:00 PM

The OpenNebula Barcelona User Group is a gathering of our users in the Barcelona area to share best practices, discuss technical questions, network, learn from each other and enjoy. Direct Link

Taking advantage of the OpenNebula Conference in Barcelona, its user group, in collaboration with the OpenNebula project and CSUC, is organizing a free open cloud session to introduce the project and share new local developments and use cases with the community and anyone interested in Open Cloud topics (Free Registration).

Agenda (Free Registration -> Register here and reserve your seat):

14:00 Welcome/Bienvenida/Benvinguda
14:05 OpenNebula Project: Open Cloud in essence – Dr. Ruben Santiago Montero (Chief Technical Officer & Co-Founder)
14:30 Cloud Bursting and VMware: the new OpenNebula VCLOUD driver – Jordi Guijarro (Cloud & Security Manager – CSUC)
14:50 Barcelona Users Group
15:00 ACB League use case – Joaquin Villanueva (Director of Media Technology)
15:20 UPC Research Lab (RDLAB) use case – Gabriel Verdejo (IT Manager)
15:40 University of Valencia use case – Israel Ribot (System Administrator)
16:00 Coffee & Networking
16:30 EOF


ONEBCN Team in collaboration with CSUC