Borja Sotomayor has just announced the release of a new version of the Haizea Lease Manager. Technology Preview 1.3 now includes support for OpenNebula 1.2 (released one week ago) as well as enhanced stability and robustness. This is a new step towards TP2.0, which will include a policy engine and several novel scheduling features. The detailed list of changes is available in the project changelog.
So, do you want to transform your rigid, compartmentalized infrastructure into a flexible and agile platform where you can dynamically deploy new services and adjust their capacity? If the answer is yes, you want to build what is nowadays called a private cloud.
In this mini-howto, you will learn how to set up such a private cloud in 5 steps with Ubuntu and OpenNebula. We assume that your infrastructure follows a classical cluster-like architecture, with a front-end (cluster01, in this howto) and a set of worker nodes (cluster02 and cluster03).
First you’ll need to add the following PPA if you’re running Ubuntu 8.04 LTS (Hardy) or Ubuntu 8.10 (Intrepid); you should be fine without it if you are running Ubuntu 9.04 (Jaunty Jackalope):
deb http://ppa.launchpad.net/opennebula-ubuntu/ppa/ubuntu intrepid main
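If you have not added a repository by hand before, this step can be sketched as follows; the file name opennebula.list is our choice, and on a real front-end you would write to /etc/apt/sources.list.d/ as root:

```shell
# Write the PPA line into an APT sources fragment. The target defaults to
# the current directory so the sketch can be tried without root; on a real
# system set SOURCES=/etc/apt/sources.list.d/opennebula.list and run as root.
SOURCES=${SOURCES:-opennebula.list}
echo 'deb http://ppa.launchpad.net/opennebula-ubuntu/ppa/ubuntu intrepid main' > "$SOURCES"
# Then refresh the package index:  sudo apt-get update
```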
If everything is set up correctly, you should be able to see the OpenNebula packages:
$ apt-cache search opennebula
libopennebula-dev - OpenNebula client library - Development
libopennebula1 - OpenNebula client library - Runtime
opennebula - OpenNebula controller
opennebula-common - OpenNebula common files
opennebula-node - OpenNebula node
OK, then. Here we go:
- [Front-end (cluster01)] Install the opennebula package:
$ sudo apt-get install opennebula
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  opennebula-common
The following NEW packages will be installed:
  opennebula opennebula-common
0 upgraded, 2 newly installed, 0 to remove and 2 not upgraded.
Need to get 280kB of archives.
After this operation, 1352kB of additional disk space will be used.
Do you want to continue [Y/n]?
...
Setting up opennebula-common (1.2-0ubuntu1~intrepid1) ...
...
Adding system user `oneadmin' (UID 107) ...
Adding new user `oneadmin' (UID 107) with group `nogroup' ...
Generating public/private rsa key pair.
...
Setting up opennebula (1.2-0ubuntu1~intrepid1) ...
oned and scheduler started
As you can see from its output, the apt-get command performs several configuration steps: it creates a oneadmin account, generates an RSA key pair, and starts the OpenNebula daemon.
- [Front-end (cluster01)] Add the cluster nodes to the system. In this case, we’ll be using KVM and no shared storage. This simple configuration should work out-of-the-box with Ubuntu:
$ onehost add cluster02 im_kvm vmm_kvm tm_ssh
Success!
...
$ onehost add cluster03 im_kvm vmm_kvm tm_ssh
Success!
...
Now we just have to follow the helpful output of the previous commands.
- [Front-end (cluster01)] You need to add the cluster nodes to the known hosts list for the oneadmin user:
$ sudo -u oneadmin ssh cluster02
The authenticity of host 'cluster02 (192.168.16.2)' can't be established.
RSA key fingerprint is 37:41:a5:0c:e0:64:cb:03:3d:ac:86:b3:44:68:5c:f9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'cluster02,192.168.16.2' (RSA) to the list of known hosts.
oneadmin@cluster02's password:
You don’t actually need to log in.
- [Worker Node (cluster02,cluster03)] Install the OpenNebula Node package:
$ sudo apt-get install opennebula-node
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  opennebula-common
The following NEW packages will be installed:
  opennebula-common opennebula-node
...
Setting up opennebula-node (1.2-0ubuntu1~intrepid1) ...
Adding user `oneadmin' to group `libvirtd' ...
Adding user oneadmin to group libvirtd
Done.
Note that the oneadmin user is also created on the nodes (no need for NIS here) and added to the libvirtd group, so it can manage the VMs.
- [Worker Node (cluster02,cluster03)] Authorize the oneadmin user for SSH access; just copy the command from the onehost output ;) :
$ sudo tee /var/lib/one/.ssh/authorized_keys << EOT
> ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAm9n0E4bS9K8NUL2bWh4F78LZWDo8uj2VZiAeylJzct
...7YPgr+Z4+lhOYPYMnZIzVArvbYzlc7HZxczGLzu+pu012a6Mv4McHtrzMqHw== oneadmin@cluster01
> EOT
You may want to check that you can log in to the cluster nodes from the front-end using the oneadmin account, without being asked for a password:
$ sudo -u oneadmin ssh cluster02
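A quick way to check all nodes at once is a small loop; BatchMode makes ssh fail instead of prompting when key authentication is not yet in place (the node names are the ones used in this howto):

```shell
# Check passwordless SSH from the front-end to every node as oneadmin.
# With BatchMode=yes, ssh exits non-zero rather than asking for a password,
# so a misconfigured node is reported instead of blocking the loop.
for node in cluster02 cluster03; do
  if sudo -u oneadmin ssh -o BatchMode=yes "$node" true 2>/dev/null; then
    echo "$node: passwordless login OK"
  else
    echo "$node: passwordless login FAILED"
  fi
done
```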
You are done! You have your own cloud up and running! Now, you should be able to see your nodes ready to start VMs:
$ onehost list
 HID NAME      RVM  TCPU  FCPU  ACPU   TMEM   FMEM STAT
   0 cluster02   0   100   100   100 963992 902768   on
   1 cluster03   0   100   100   100 963992 907232   on
You may want to check the OpenNebula documentation for further information on configuring the system or to learn how to specify virtual machines.
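Once the hosts are up, the next step covered by the documentation is writing a VM template. As a taste, here is a minimal sketch for OpenNebula 1.2; the image path, disk target and network name are placeholders to adapt to your site:

```shell
# A minimal VM template sketch for OpenNebula 1.2. Attribute names follow
# the OpenNebula user guide; the values below are illustrative examples.
cat > ttylinux.one <<'EOT'
NAME   = ttylinux
CPU    = 0.5
MEMORY = 128
DISK   = [ source = "/srv/images/ttylinux.img", target = "hda", readonly = "no" ]
NIC    = [ network = "Public" ]
EOT
# Submit it from the front-end as oneadmin:  onevm create ttylinux.one
```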
The OpenNebula team is happy to announce the availability of OpenNebula 1.2, the second stable release of the project. This is an important milestone, marking the point at which most of the components of OpenNebula are in place.
What is OpenNebula?
The OpenNebula virtual infrastructure engine provides efficient, dynamic and scalable management of groups of interconnected VMs within datacenters involving a large number of virtual and physical servers. OpenNebula supports the Xen and KVM platforms and can interface with remote cloud sites, being the only tool able to access Amazon EC2 on demand to dynamically scale the local infrastructure based on actual usage. OpenNebula also exhibits an open and flexible architecture which allows the definition of new algorithms for virtual machine placement, and its integration with any virtualization platform, infrastructure cloud offering and third-party component in the cloud ecosystem, such as cloud-like remote interfaces, virtual image managers, and service managers. OpenNebula is one of the components being enhanced in the context of the European Union’s RESERVOIR Project, which aims to develop the open-source technology to enable deployment and management of complex IT services across different administrative domains.
New Features and Highlights in OpenNebula 1.2
OpenNebula 1.2 presents important improvements in the following areas:
- Image Management. OpenNebula 1.2 features a general mechanism to transfer and clone VM images. The new Transfer Manager is a modular component that embraces the driver-based design of OpenNebula, so it can be easily extended and integrated with third-party developments and virtually any cluster storage architecture. The new TM allows you to re-use VM images, as you can mark them as clonable; you can also save space, since swap images are now created on the fly by OpenNebula.
- Networking. With OpenNebula 1.0 it is difficult to track the MACs/IPs in use by the running VMs and their association with physical networks; that mechanism does not scale when dealing with tens of VMs. The new Virtual Network Manager module allows you to define virtual networks and leases IP-MAC pairs to VMs, so you do not have to keep track of the addresses in use. In this way it is pretty much like an embedded DHCP server. Additionally, the leases are built in such a way that you can easily obtain the IP from the MAC when booting the VM. OpenNebula 1.0 networking is still supported, in case you do not want to use the new functionality.
- Robustness and scalability. OpenNebula 1.2 has been tested managing hundreds of running VMs to ensure that the code meets production-level requirements.
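As an illustration of the new networking support, a virtual network definition can be sketched like this; the attribute names follow the OpenNebula Networking guide, while the bridge name and address range are examples:

```shell
# Sketch of a virtual network definition for the new Virtual Network
# Manager. A RANGED network leases IP-MAC pairs out of the given range;
# check the Networking guide for the exact attribute set of your version.
cat > public.net <<'EOT'
NAME            = "Public"
TYPE            = RANGED
BRIDGE          = br0
NETWORK_ADDRESS = 192.168.16.0
NETWORK_SIZE    = C
EOT
# Register it from the front-end with:  onevnet create public.net
```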
Getting OpenNebula 1.2
The complete source tree for OpenNebula can be freely downloaded. Additionally, the Ubuntu virtualization team has kindly provided binary packages for Ubuntu 9.04 (Jaunty Jackalope).
Please refer to the OpenNebula documentation guides to install and configure your system. More information:
- Benefits and Features
- Complete Release Notes of OpenNebula 1.2
- Download OpenNebula 1.2
- Installation, Configuration and User Guides
- OpenNebula FAQ
The OpenNebula team would like to thank everyone who sent comments, reported bugs and provided patches. It definitely helped make OpenNebula 1.2 better. In particular, we would like to thank Soren Hansen for his great work, reflected in submitted patches that brought OpenNebula closer to working out of the box on Ubuntu and Debian.
Nowadays it is difficult to find a place where the benefits of virtualization are not praised. One of my favorites is consolidation, which has put an end to the “one application, one server” paradigm. However, when you start placing your services on different hosts, you will quickly find that the new model does not scale per se. Soon (ten VMs is enough), you’ll be trying to answer questions like: Where did I put our web server? Is this MAC address already in use? Is this the updated image for the Ubuntu server? And the like.
That was our situation last year; although we organized ourselves with a wiki and a database to store all the information, it became evident that we needed a management layer on top of the hypervisors. So we started the development of OpenNebula to implement our ideas about the distributed management of virtual machines.
Today OpenNebula is a reality, and it has reached production-quality status (at least we are using it in production). I’ll briefly show you how we set up our infrastructure with OpenNebula, KVM and Xen.
The Physical Cluster
Our infrastructure is an 8-blade cluster, each blade with 8GB of RAM and two quad-core CPUs (8 cores/blade). The blades are connected to a private LAN and to the Internet, so they can host public services. The output of the onehost list command is as follows – it’s been colorized for dramatic purposes ;) :
Originally the blades were configured as a classical computing cluster.
The Virtual Infrastructure
Now, thanks to OpenNebula, the cluster can be easily configured to host multiple services. For example, we are using the 8 blades for:
- Two grid clusters (4 nodes each) for our grid projects. Needless to say, we can add/remove nodes to these clusters as we need them
- A grid broker based on the GridWay metascheduler
- A web server, actually the one serving this page
- A couple of teaching clusters for our master’s students, so they can break anything
- A cluster devoted to the RESERVOIR project
- And my favorite, a cluster where we develop OpenNebula. The nodes of this cluster are Xen-capable, and they run as KVM virtual machines. So yes, we are running VMs on Xen nodes which are themselves KVM VMs.
This is the output of the onevm list command:
One of the most useful features of the new version of OpenNebula is the networking support. You can define several networks, and OpenNebula will lease you MAC addresses from the network you want. So you do not have to worry about colliding addresses. The MAC addresses are assigned using a simple pattern so you can derive the IP from the MAC (see the OpenNebula Networking guide for more info).
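Assuming the pattern just described, where the last four octets of the MAC encode the IPv4 address in hexadecimal, a small helper can recover the IP; the MAC prefix and example address below are illustrative, so check the Networking guide for the exact pattern your installation uses:

```shell
# Derive the IP encoded in an OpenNebula-style MAC address, assuming the
# last four octets are the IPv4 address in hexadecimal.
mac_to_ip() {
  # Split the MAC on ':' and discard the two prefix octets.
  IFS=: read -r _ _ o1 o2 o3 o4 <<EOF
$1
EOF
  # Convert each hex octet to decimal dotted-quad notation.
  printf '%d.%d.%d.%d\n' "0x$o1" "0x$o2" "0x$o3" "0x$o4"
}

mac_to_ip 02:01:c0:a8:10:05   # prints 192.168.16.5
```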
We are using a virtual network for each virtual cluster; here is the output of the onevnet list and onevnet show commands:
Thanks to the hypervisors and OpenNebula, we can shape our 8 blades into any combination of services, and we can resize these services to suit our needs. As can be seen above, you can easily find where the web server is running, so you can get a console if you need one, or migrate the VM to another blade…
Although management is now easy with OpenNebula, setting everything up (installing the physical blades and the VMs, configuring the network…) is a difficult task. Kudos to Javi and Tino for getting the system running!
SYS-CON’s Cloud Computing Journal has just expanded its list of the most active players in the fast-emerging cloud ecosystem: the cloud players driving the most enterprise-relevant innovation. Cloud computing is an opportunity for organizations to implement low-cost, low-power and high-efficiency systems to deliver scalable infrastructure.
Last month I was invited to give a couple of talks about cloud computing at the wonderful C3RS (Cisco Cloud Computing Research Symposium) and at a Spanish e-science meeting (the slides are available online, if you want to check them). Although the audiences were quite heterogeneous, there was a recurrent question among the participants of these events: How can I set up my private cloud? Let me briefly summarize the motivations of the people asking this:
- Lease compute capacity from the local infrastructure. These people acknowledge the benefits of virtualizing their infrastructure as a whole. However, they are not interested in selling this capacity over the Internet, or at least it is not a priority for them. That is, they do not want to become an EC2 competitor, so they do not need to expose a cloud interface to the world.
- Capacity in the cloud. They do not want to be the new EC2, but they do want to use EC2. The ability to move some services, or part of the capacity of a service, to an external provider is very attractive to them.
- Open source. Current cloud solutions are proprietary and closed; they need an open-source solution to play with. Also, they are already using virtualization technologies that they would like to see integrated in the final solution.
To these people I say: take a look at OpenNebula. OpenNebula is a distributed virtual machine manager that allows you to virtualize your infrastructure. It also features integral management of your virtual services, including networking and image management. Additionally, it ships with EC2 plug-ins that allow you to simultaneously deploy virtual machines on your local infrastructure and on Amazon EC2.
OpenNebula is modular by design to allow its integration with other tools, like the Haizea lease manager, or Nimbus, which gives you an EC2-compatible interface in case you need one. It is healthy open-source software being improved in several projects, such as RESERVOIR, and it has a growing community.
MIT Technology Review has just published an interesting article entitled “Opening the Cloud” about open-source technological components for building a cloud-like infrastructure. The article focuses on the IaaS (Infrastructure as a Service) paradigm, describing the components required to develop a solution that provides virtualized resources as a service. The article briefly describes the following technologies: OpenNebula, Globus Nimbus, and Eucalyptus.
In the OpenNebula project, we strongly believe that a complete cloud solution requires the integration of several of the available components, with each component focused on a niche. The open architecture and interfaces of the OpenNebula VM manager allow its integration with third-party tools, such as capacity managers, cloud interfaces, service adapters, VM image managers…, thus supporting a complete solution for the deployment of flexible and efficient virtual infrastructures. We maintain an Ecosystem web page with information about third-party tools that extend the functionality provided by OpenNebula.
Virtualization has opened up avenues for new resource management techniques within the data center. Probably its most important characteristic is the ability to dynamically shape a given hardware infrastructure to support different services with varying workloads, effectively decoupling the management of the service (for example, a web server or a computing cluster) from the management of the infrastructure (e.g. the resources allocated to each service, or the interconnection network).
A key component in this scenario is the virtual machine manager. A VM manager is responsible for the efficient management of the virtual infrastructure as a whole, by providing basic functionality for the deployment, control and monitoring of VMs on a distributed pool of resources. Usually, these VM managers also offer high availability capabilities and scheduling policies for VM placement and physical resource selection. Taking advantage of the underlying virtualization technologies and according to a set of predefined policies, the VM manager is able to adapt the physical infrastructure to the services it supports and their current load. This adaptation usually involves the deployment of new VMs or the migration of running VMs to optimize their placement.
The dsa-research group at the Universidad Complutense de Madrid has released, under the terms of the Apache License, Version 2.0, the first stable version of the OpenNebula Virtual Infrastructure Engine. OpenNebula enables the dynamic allocation of virtual machines on a pool of physical resources, thus extending the benefits of existing virtualization platforms from a single physical resource to a pool of resources and decoupling the server not only from the physical infrastructure but also from the physical location. OpenNebula is a component being enhanced within the context of the RESERVOIR European Project.
The new VM manager differentiates itself from existing VM managers through its highly modular and open architecture, designed to meet the requirements of cluster administrators. OpenNebula 1.0 supports the Xen and KVM virtualization platforms and provides several features and capabilities for dynamic VM management, such as centralized management, efficient resource management, powerful API and CLI interfaces for monitoring and controlling VMs and physical resources, and a fault-tolerant design. Two of the outstanding new features are its support for advance reservation leases and on-demand access to remote cloud providers.
Support for Advance Reservation Leases
Haizea is an open source lease management architecture that OpenNebula can use as a scheduling backend. Haizea uses leases as a fundamental resource provisioning abstraction, and implements those leases as virtual machines, taking into account the overhead of using virtual machines (e.g., deploying a disk image for a VM) when scheduling leases. Using OpenNebula with Haizea allows resource providers to lease their resources, using potentially complex lease terms, instead of only allowing users to request VMs that must start immediately.
Support for On-Demand Access to Amazon EC2 Resources
Recently, virtualization has also brought about a new utility computing model, called cloud computing, for the on-demand provision of virtualized resources as a service. The Amazon Elastic Compute Cloud is probably the best example of this new paradigm for elastic capacity provision. Thanks to virtualization, clouds can be used efficiently to supplement local capacity with outsourced resources. The joint use of these two technologies, VM managers and clouds, will arguably change the structure and economics of current data centers. OpenNebula provides support for accessing Amazon EC2 resources to supplement local resources and satisfy peak or fluctuating demands.
Scale-out of Computing Clusters with OpenNebula and Amazon EC2
As a use case illustrating the new capabilities provided by OpenNebula, the release includes documentation about the application of this new paradigm (i.e. the combination of VM managers and cloud computing) to a computing cluster, a typical data center service. The use of a new virtualization layer between the computing cluster and the physical infrastructure extends the classical benefits of VMs to the computing cluster, providing cluster consolidation, cluster partitioning and support for heterogeneous workloads. Moreover, the integration of the cloud in this layer allows the cluster to grow on demand with additional computational resources to satisfy peak demands.
The dsa-research group is pleased to announce that a stable release (v1.0) of the OpenNebula (ONE) Virtual Infrastructure Engine is available for download under the terms of the Apache License, Version 2.0. ONE enables the dynamic allocation of virtual machines on a pool of physical resources, so extending the benefits of existing virtualization platforms from a single physical resource to a pool of resources, decoupling the server not only from the physical infrastructure but also from the physical location.
The OpenNebula Virtual Infrastructure Engine differentiates itself from existing VM managers through its highly modular and open architecture, designed to meet the requirements of cluster administrators. The latest version supports the Xen and KVM virtualization platforms and provides the following features and capabilities:
- Centralized management, a single access point to manage a pool of VMs and physical resources.
- Efficient resource management, including support to build any capacity provision policy and for advance reservation of capacity through the Haizea lease manager
- Powerful API and CLI interfaces for monitoring and controlling VMs and physical resources
- Easy 3rd party software integration to provide a complete solution for the deployment of flexible and efficient virtual infrastructures
- Fault-tolerant design: state is kept in a SQLite database.
- Open and flexible architecture to add new infrastructure metrics and parameters or even to support new Hypervisors.
- Support to access Amazon EC2 resources to supplement local resources with cloud resources to satisfy peak or fluctuating demands.
- Ease of installation and administration on UNIX clusters
- Open source software released under Apache license v2.0
- As engine for the dynamic management of VMs, OpenNebula is being enhanced in the context of the RESERVOIR project (EU grant agreement 215605) to address the requirements of several business use cases.
More details at http://www.opennebula.org/doku.php?id=documentation:rn-rel1.0
- Benefits and Features: http://www.opennebula.org/doku.php?id=about
- Documentation: http://www.opennebula.org/doku.php?id=documentation
- Release Notes: http://www.opennebula.org/doku.php?id=documentation:rn-rel1.0
- Download: http://www.opennebula.org/doku.php?id=software
- Ecosystem: http://www.opennebula.org/doku.php?id=ecosystem
I would like to give a warm welcome to Haizea to the virtualization ecosystem. This new technological component is an open-source VM-based lease management architecture, which can be used:
- As a platform for experimenting with scheduling algorithms that depend on VM deployment or on the leasing abstraction.
- In combination with the OpenNebula virtual infrastructure manager, to manage a Xen or KVM cluster, allowing you to deploy different types of leases that are instantiated as virtual machines (VMs).
Its full integration with OpenNebula will be part of the next Technology Preview (TP1.1), due mid-July. Haizea is being developed by Borja Sotomayor, a PhD student at the University of Chicago, who is now visiting our research group, partially funded by the European Union’s FP7 RESERVOIR project (“Resources and Services Virtualization without Barriers”).