Planning the Installation 3.2

This guide provides a high-level overview of an OpenNebula installation, so you can easily architect your deployment and understand the technologies involved in the management of virtualized resources and how they relate to each other.

OpenNebula assumes that your physical infrastructure adopts a classical cluster-like architecture: a front-end and a set of hosts where Virtual Machines (VMs) will be executed, with at least one physical network joining all the hosts to the front-end.

[Figure: high-level architecture of the cluster, its components and their relationships]

The basic components of an OpenNebula system are:

Front-End

The machine that holds the OpenNebula installation is called the front-end. This machine needs access to the image repository storage (e.g. a direct mount or network access) and network connectivity to each host. The base installation of OpenNebula takes less than 10 MB.

OpenNebula services include:

  • Management daemon (oned) and scheduler (mm_sched)
  • Monitoring and accounting daemon (onecctd)
  • Web interface server (sunstone)
  • Cloud API servers (ec2-query and/or occi)

:!: Note that these components communicate through XML-RPC and may be installed on different machines for security or performance reasons.
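
As a quick illustration, these services are typically started from the command line as the OpenNebula admin user. This is only a sketch; the exact commands and their availability may differ depending on how you installed OpenNebula:

<xterm>
$ one start                # starts oned and the scheduler
$ sunstone-server start    # starts the Sunstone web interface
$ econe-server start       # starts the EC2 Query API server (optional)
$ occi-server start        # starts the OCCI API server (optional)
</xterm>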

Requirements for the Front-End are:

  • ruby >= 1.8.7
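
You can quickly check the Ruby version available on the front-end (assuming ruby is in the PATH); the reported version must be 1.8.7 or newer:

<xterm>
$ ruby -v
</xterm>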

Hosts

The hosts are the physical machines that will run the VMs. During the installation you will have to configure the OpenNebula administrative account to be able to SSH to the hosts and, depending on your hypervisor, allow this account to execute commands with root privileges or make it part of a given group.
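
For example, assuming the administrative account is named oneadmin (the default) and a host called host01 (a placeholder for one of your hosts), passwordless SSH from the front-end could be set up roughly as follows:

<xterm>
$ su - oneadmin
$ ssh-keygen -t rsa              # accept the defaults, empty passphrase
$ ssh-copy-id oneadmin@host01    # repeat for every host
$ ssh oneadmin@host01 hostname   # should not ask for a password
</xterm>

If the oneadmin home directory is shared between the front-end and the hosts (e.g. over NFS), copying the key to each host may not be necessary.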

OpenNebula doesn't need to install any packages on the hosts, and the only requirements for them are the following (a quick sanity check is sketched after the list):

  • ssh server running
  • hypervisor installed and properly configured
  • ruby >= 1.8.7
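
A minimal sanity check from the front-end might look like this, assuming a KVM host named host01 (a placeholder) and the oneadmin account configured as described above; for Xen hosts, check the xm/xend tools instead of virsh:

<xterm>
$ ssh oneadmin@host01 'ruby -v'                            # Ruby >= 1.8.7
$ ssh oneadmin@host01 'virsh -c qemu:///system version'    # libvirt/KVM reachable
</xterm>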

Image Repository & Storage

OpenNebula features an Image Repository to handle the VM image files. The Image Repository has to be accessible from the front-end using any suitable storage technology: NAS, SAN, or direct-attached storage.

Images are transferred to the hosts to be used by the VMs. OpenNebula can handle multiple storage scenarios (see the storage overview for more details). In the following we assume that the front-end and hosts share a common storage area by means of a shared FS of any kind. This way you can take full advantage of the hypervisor capabilities (e.g. live migration) and typically achieve better VM deployment times.

[Figure: Storage Model: NFS]
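
As an illustration of the shared-FS scenario, the front-end could export /var/lib/one over NFS and every host could mount it at the same path. The network address, export options and hostname below are placeholders; adjust them to your environment:

<xterm>
# On the front-end, /etc/exports:
/var/lib/one  192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash)

# On each host:
$ mount -t nfs frontend:/var/lib/one /var/lib/one
</xterm>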

The Image Repository should be big enough to store the VM images for your private cloud. Usually, these are master images that are cloned (copied) when you start a VM instance. So you need to plan your storage requirements depending on the number of VMs you'll run in your virtual infrastructure.

Example: a 64-core cluster will typically run around 80 VMs, and each VM will require an average of 10 GB of disk space, so you will need ~800 GB for /var/lib/one. You will also want to store 10-15 master images, so ~200 GB for /var/lib/one/images. A 1 TB /var/lib/one will be enough for this example setup.
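
Once the storage area is in place, a quick way to confirm the space available under the default location (adjust the path if yours differs):

<xterm>
$ df -h /var/lib/one
</xterm>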

:!: OpenNebula can work without a Shared FS. This will force the deployment to always clone the images and you will only be able to do cold migrations.

Networking

OpenNebula provides an easily adaptable and customizable network subsystem in order to better integrate with the specific network requirements of existing datacenters.

The network is needed by the OpenNebula front-end daemons to access the hosts in order to manage and monitor the hypervisors and to move image files. Depending on your infrastructure, you may want to set up a dedicated network for this purpose.

To offer network connectivity to the VMs across the different hosts, the default configuration connects the virtual machine network interface to a bridge in the physical host. This is achieved with Ethernet bridging; several guides explain how to deal with this configuration, for example the networking howto in the Xen documentation. To make effective use of your VM deployments you'll probably need to make one or more physical networks accessible to them.

For example, a typical host with two physical networks, one for public IP addresses (attached to NIC eth0) and the other for private virtual LANs (NIC eth1), should have two bridges:

<xterm>
$ brctl show
bridge name     bridge id               STP enabled     interfaces
vbr0            8000.001e682f02ac       no              eth0
vbr1            8000.001e682f02ad       no              eth1
</xterm>

:!: You should create bridges with the same name on all the hosts. Depending on the network model, OpenNebula will dynamically create network bridges; please check the networking overview for more details.
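
For reference, on a Linux host with bridge-utils installed, bridges like the ones above could be created manually along these lines (interface and bridge names are examples; most deployments make this configuration persistent through the distribution's network configuration files):

<xterm>
$ brctl addbr vbr0          # bridge for the public network
$ brctl addif vbr0 eth0
$ ip link set vbr0 up
$ brctl addbr vbr1          # bridge for the private network
$ brctl addif vbr1 eth1
$ ip link set vbr1 up
</xterm>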

Next Steps

Proceed to the Installing the Software guide to install OpenNebula.