Building a Private Cloud

What is a Private Cloud?

The aim of a Private Cloud is not to expose to the world a cloud interface to sell capacity over the Internet, but to provide local users with a flexible and agile private infrastructure to run virtualized service workloads within the administrative domain. OpenNebula virtual infrastructure interfaces expose user and administrator functionality for virtualization, networking, image and physical resource configuration, management, monitoring and accounting.

The User View

An OpenNebula Private Cloud provides infrastructure users with an elastic platform for fast delivery and scalability of services, to meet the dynamic demands of service end-users. Services are hosted in VMs, then submitted, monitored and controlled in the Cloud by using the virtual infrastructure interfaces:

  • Command line interface
  • XML-RPC API (see the sketch after this list)
  • Libvirt virtualization API or any of its management tools
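
All of the CLI functionality shown below is also available through the other two interfaces. As an illustration, the XML-RPC API can be driven with any XML-RPC client; the following is a minimal sketch that lists the VM pool, assuming the default endpoint at http://localhost:2633/RPC2 and a placeholder oneadmin:password session string (method signatures vary between OpenNebula versions, so check the XML-RPC reference for yours):

<xterm> $ curl -s -H "Content-Type: text/xml" http://localhost:2633/RPC2 \
  --data '<?xml version="1.0"?>
<methodCall>
  <methodName>one.vmpool.info</methodName>
  <params>
    <param><value><string>oneadmin:password</string></value></param>
    <param><value><int>-2</int></value></param>
  </params>
</methodCall>'
</xterm>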

Let's do a sample session to illustrate the functionality provided by the OpenNebula CLI for Private Cloud Computing. The first thing to do is to check the hosts in the physical cluster:

<xterm> $ onehost list

 HID NAME                      RVM   TCPU   FCPU   ACPU    TMEM    FMEM STAT
   0 host01                      0    800    800    800 8194468 7867604   on
   1 host02                      0    800    797    800 8387584 1438720   on

</xterm>
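
If a host is missing from this list, it first has to be registered with onehost create. A minimal sketch for a Xen cluster follows; the im_xen, vmm_xen and tm_ssh driver names are an assumption and must match the drivers configured in your installation:

<xterm> $ onehost create host02 im_xen vmm_xen tm_ssh </xterm>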

We can then submit a VM to OpenNebula by using onevm. We are going to build a VM template that boots from the image we previously placed in the /opt/nebula/images directory.

# Capacity allocated to the VM; make sure it fits in at least one host
CPU    = 0.5
MEMORY = 128

# Xen kernel and initrd used to boot the guest, and its root device
OS     = [
  kernel   = "/boot/vmlinuz-2.6.18-4-xen-amd64",
  initrd   = "/boot/initrd.img-2.6.18-4-xen-amd64",
  root     = "sda1" ]

# Root filesystem image from the image directory
DISK   = [
  source   = "/opt/nebula/images/disk.img",
  target   = "sda1",
  readonly = "no" ]

# 1 GB swap disk, created on the fly at deployment time
DISK   = [
  type     = "swap",
  size     = 1024,
  target   = "sdb" ]

# One network interface attached to the Public VLAN virtual network
NIC    = [ NETWORK = "Public VLAN" ]

Once we have tailored the requirements to our needs (especially the CPU and MEMORY fields), ensuring that the VM fits into at least one of the two hosts, let's submit the VM (assuming you are currently in your home folder):

<xterm> $ onevm submit myfirstVM.template </xterm>

This should come back with an ID that we can use to identify the VM for monitoring and control, again through the onevm command:

<xterm> $ onevm list

ID     USER     NAME STAT CPU     MEM        HOSTNAME        TIME
 0 oneadmin    one-0 runn   0   65536          host01  00 0:00:02

</xterm>

The STAT field tells us the state of the virtual machine; the runn state means the virtual machine is up and running. Depending on how we set up the image, we may know its IP address. If that is the case, we can now try to log into the VM.
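
For example, assuming the image was prepared with SSH access enabled and that it obtained the hypothetical address 192.168.0.5 from the virtual network (both depend entirely on how the image was built):

<xterm> $ ssh root@192.168.0.5 </xterm>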

To perform a migration, we once again use the onevm command. Let's move the VM (with VID=0) to host02 (HID=1):

<xterm> $ onevm livemigrate 0 1 </xterm>
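
Note that livemigrate keeps the VM running while it moves. If the hypervisor setup does not support live migration, the plain migrate subcommand can be used instead; it saves the VM on the source host and resumes it on the target:

<xterm> $ onevm migrate 0 1 </xterm>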

This will move the VM from host01 to host02. Running onevm list again shows something like the following:

<xterm> $ onevm list

ID     USER     NAME STAT CPU     MEM        HOSTNAME        TIME
 0 oneadmin    one-0 runn   0   65536          host02  00 0:00:06

</xterm>

How the System Operates

OpenNebula does the following:

  • Manages Virtual Networks. Virtual networks interconnect VMs. Each Virtual Network includes a description.
  • Creates VMs. The VM description is added to the database.
  • Deploys VMs. According to the allocation policy, the scheduler decides where to execute the VMs.
  • Manages VM Images. Before execution, VM images are transferred to the host and swap disk images are created. After execution, VM images may be copied back to the repository.
  • Manages Running VMs. VMs are started, periodically polled to obtain their consumption and state, and can be shut down, suspended, stopped or migrated (see the session sketch after this list).
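
These life-cycle operations map directly onto onevm subcommands. A brief sketch using the VM from the session above (which operations are allowed depends on the current state of the VM):

<xterm> $ onevm suspend 0
$ onevm resume 0
$ onevm shutdown 0
</xterm>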

The main functional components of an OpenNebula Private Cloud are the following:

  • Hypervisor: Virtualization manager installed on the cluster nodes, which OpenNebula leverages to manage the VMs within each host.
  • Virtual Infrastructure Manager: Centralized manager of VMs and resources, providing virtual network management, VM life-cycle management, VM image management and fault tolerance.
  • Scheduler: VM placement policies for workload balancing, server consolidation, placement constraints, affinity, advance reservation of capacity and SLA commitment (see the template sketch below).
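
Placement can also be steered per VM from the template itself. The following is a minimal sketch using the REQUIREMENTS and RANK attributes understood by the default match-making scheduler; the FREECPU host variable and the exact attribute names are assumptions that may vary between OpenNebula versions:

# Only consider hosts with more than one full CPU free,
# and prefer the host with the most CPU available
REQUIREMENTS = "FREECPU > 100"
RANK         = "FREECPU"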