Planning the Installation 4.0

This guide provides a high-level overview of an OpenNebula installation, so you can easily architect your deployment and understand the technologies involved in the management of virtualized resources and their relationship.

OpenNebula assumes that your physical infrastructure adopts a classical cluster-like architecture with a front-end, and a set of hosts where Virtual Machines (VM) will be executed. There is at least one physical network joining all the hosts with the front-end.

[Figure: high-level architecture of the cluster, its components and their relationships]

The basic components of an OpenNebula system are:

Front-End

The machine that holds the OpenNebula installation is called the front-end. This machine needs access to the storage Datastores (e.g. via a direct mount or over the network) and network connectivity to each host. The base installation of OpenNebula takes less than 10MB.

OpenNebula services include:

  • Management daemon (oned) and scheduler (mm_sched)
  • Monitoring and accounting daemon (onacctd)
  • Web interface server (sunstone)
  • Cloud API servers (ec2-query and/or occi)
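
As an illustration, once OpenNebula is installed these services can typically be started with the packaged commands shown below, run as the OpenNebula administrative user; the exact commands and which optional servers you enable may vary with your packages and configuration.

<xterm>
$ one start              # starts oned and mm_sched
$ sunstone-server start  # starts the Sunstone web interface
$ econe-server start     # starts the EC2 Query server (optional)
$ occi-server start      # starts the OCCI server (optional)
</xterm>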

:!: Note that these components communicate through XML-RPC and may be installed on different machines for security or performance reasons.

Requirements for the Front-End are:

  • ruby >= 1.8.7
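
A quick way to verify this requirement on the front-end is to check the interpreter version:

<xterm>
$ ruby -v   # the reported version should be 1.8.7 or newer
</xterm>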

Additionally, you should be able to SSH without a password from the front-end to all the hosts, including the front-end itself.
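
A minimal sketch of how this is usually set up, assuming the OpenNebula administrative account is called oneadmin and host01/host02 are placeholder host names:

<xterm>
$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # generate a key pair without a passphrase
$ ssh-copy-id oneadmin@host01                # copy the public key to each host
$ ssh-copy-id oneadmin@host02
$ ssh-copy-id oneadmin@localhost             # the front-end must also accept its own key
</xterm>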

Hosts

The hosts are the physical machines that will run the VMs. During the installation you will have to configure the OpenNebula administrative account so that it can SSH to the hosts and, depending on your hypervisor, allow this account to execute commands with root privileges or make it part of a given group.
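
For example, with KVM the administrative account usually needs to belong to the group that grants access to libvirt; the account and group names below are only an illustration and the group is distribution dependent:

<xterm>
# usermod -a -G libvirtd oneadmin   # group may be called libvirtd, libvirt or kvm depending on the distribution
</xterm>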

OpenNebula doesn't need to install any packages on the hosts, and the only requirements for them are:

  • ssh server running
  • hypervisor installed and properly configured
  • ruby >= 1.8.7
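
These requirements can be quickly checked on each host; the virsh command below assumes a KVM/libvirt hypervisor and is only an example:

<xterm>
$ pgrep -l sshd                     # the SSH server must be running
$ ruby -v                           # ruby >= 1.8.7
$ virsh -c qemu:///system version   # example check for a KVM/libvirt hypervisor
</xterm>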

Additionally, you should be able to SSH without a password from each host to all the hosts (including the host itself) and to the front-end.
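
A simple way to verify this from one of the hosts, using placeholder host names:

<xterm>
$ for h in frontend host01 host02; do ssh "$h" hostname; done   # must not prompt for a password
</xterm>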

Storage

OpenNebula uses Datastores to handle the VM disk Images. VM Images are registered, or created as empty volumes, in a Datastore. In general, each Datastore has to be accessible from the front-end using any suitable storage technology: NAS, SAN, or direct attached storage.
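
Once OpenNebula is running, you can list the Datastores known to the front-end with the onedatastore command; the IDs and names shown will depend on your installation:

<xterm>
$ onedatastore list   # shows the configured Datastores, including the system and default ones
</xterm>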

When a VM is deployed, the Images are transferred from the Datastore to the hosts. Depending on the actual storage technology used, this can mean a real transfer, a symbolic link, or setting up an iSCSI target. Please check the Storage guide for more details.

In the following sections we will show the basic installation and configuration using Filesystem Datastores (file-based disk images) and a shared filesystem of any kind to transfer the Datastore images. This way you can take full advantage of the hypervisor capabilities (i.e. live migration) and typically achieve better VM deployment times.

There are two configuration steps needed to perform a basic set up:

  • First, you need to configure the system datastore to hold images for the running VMs; check the System Datastore Guide for more details.
  • Then you have to set up one or more datastores for the disk images of the VMs; you can find more information on setting up Filesystem Datastores here.
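
As a sketch of the shared filesystem approach, assuming NFS, the default datastore location /var/lib/one/datastores and a hypothetical 192.168.1.0/24 management network, the front-end could export the datastores directory and each host could mount it at the same path:

<xterm>
# cat /etc/exports    # on the front-end
/var/lib/one/datastores 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

# mount -t nfs frontend:/var/lib/one/datastores /var/lib/one/datastores   # on each host
</xterm>

Any other shared filesystem that makes the same path available on the front-end and the hosts can be used instead.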

:!: OpenNebula can work without a Shared FS. This will force the deployment to always clone the images and you will only be able to do cold migrations.

Networking

OpenNebula provides an easily adaptable and customizable network subsystem in order to better integrate with the specific network requirements of existing datacenters.

The network is needed by the OpenNebula front-end daemons to access the hosts in order to manage and monitor the hypervisors, and to move image files. It is highly recommended to set up a dedicated network for this purpose.

To offer network connectivity to the VMs across the different hosts, the default configuration connects the virtual machine network interface to a bridge in the physical host. To make effective use of your VM deployments you'll probably need to make one or more physical networks accessible to them. Please check the Networking guide to find out the networking technologies supported by OpenNebula.

:!: You should create bridges with the same name on all the hosts. Depending on the network model, OpenNebula will dynamically create network bridges; please check the networking overview for more details.

For example, a typical host with two physical networks, one for public IP addresses (attached to the eth0 NIC) and the other for private virtual LANs (eth1 NIC), should have two bridges:

<xterm>
$ brctl show
bridge name     bridge id               STP enabled     interfaces
vbr0            8000.001e682f02ac       no              eth0
vbr1            8000.001e682f02ad       no              eth1
</xterm>
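
As an illustration only, such bridges could be created manually with brctl; these commands are not persistent across reboots, so the bridges should normally be defined in the distribution's network configuration instead:

<xterm>
# brctl addbr vbr0 && brctl addif vbr0 eth0   # public network bridge on eth0
# brctl addbr vbr1 && brctl addif vbr1 eth1   # private VLAN bridge on eth1
# ip link set vbr0 up && ip link set vbr1 up
</xterm>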

Next Steps

Proceed to the Installing the Software guide to install OpenNebula.