Planning the Installation 4.0
This guide provides a high-level overview of an OpenNebula installation, so you can easily architect your deployment and understand the technologies involved in the management of virtualized resources and how they relate to one another.
OpenNebula assumes that your physical infrastructure adopts a classical cluster-like architecture with a front-end, and a set of hosts where Virtual Machines (VM) will be executed. There is at least one physical network joining all the hosts with the front-end.
The basic components of an OpenNebula system are:
The machine that holds the OpenNebula installation is called the front-end. This machine needs to have access to the storage Datastores (e.g. via a direct mount or over the network), and network connectivity to each host. The base installation of OpenNebula takes less than 10MB.
OpenNebula services include:
Requirements for the Front-End are:
Additionally, you should be able to ssh passwordlessly from the front-end to all the hosts, including the front-end itself.
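A common way to set this up (shown here as a sketch; `oneadmin` is OpenNebula's default administrative account name, and `host01` is a placeholder for one of your hosts) is:

```shell
# On the front-end, as the oneadmin user:
# generate a key pair without a passphrase, if one does not exist yet
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# copy the public key to each host (repeat for every host,
# and for the front-end itself)
ssh-copy-id oneadmin@host01

# verify: this should log in and exit without prompting for a password
ssh oneadmin@host01 true
```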
The hosts are the physical machines that will run the VMs. During the installation you will have to configure the OpenNebula administrative account to be able to ssh to the hosts, and depending on your hypervisor you will have to allow this account to execute commands with root privileges or make it part of a given group.
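As an illustration of the group-membership case (a sketch only; the group name varies by hypervisor and distribution, e.g. `libvirtd`, `libvirt` or `kvm` for KVM setups, and `oneadmin` is assumed to be the administrative account):

```shell
# On each host, as root:
# add the OpenNebula administrative account to the group that
# grants access to the hypervisor (group name is distribution-specific)
usermod -a -G libvirtd oneadmin
```

Check your hypervisor's driver guide for the exact group or sudo privileges required.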
OpenNebula doesn't need to install any packages in the hosts, and the only requirements for them are:
Additionally, each host should be able to ssh passwordlessly to all the other hosts, to itself, and to the front-end.
OpenNebula uses Datastores to handle the VM disk Images. VM Images are registered, or created (as empty volumes), in a Datastore. In general, each Datastore has to be accessible from the front-end using any suitable technology: NAS, SAN or direct attached storage.
When a VM is deployed the Images are transferred from the Datastore to the hosts. Depending on the actual storage technology used, this can mean a real transfer, a symbolic link or setting up an iSCSI target. Please check the Storage guide for more details.
In the following sections we will show the basic installation and configuration using Filesystem Datastores (file-based disk images) and a Shared FS of any kind to transfer the datastore images. This way you can take full advantage of the hypervisor capabilities (i.e. live migration), and typically achieve better VM deployment times.
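As an illustration of one possible Shared FS setup (NFS here, purely as an example; `nfs-server` is a placeholder hostname, and `/var/lib/one/datastores` is OpenNebula's default datastore location):

```shell
# On the NFS server (this may be the front-end itself), as root:
# export the datastores directory
echo '/var/lib/one/datastores *(rw,sync,no_subtree_check,no_root_squash)' >> /etc/exports
exportfs -ra

# On each host, as root: mount the shared datastores
# directory at the same path
mount -t nfs nfs-server:/var/lib/one/datastores /var/lib/one/datastores
```

Any shared filesystem works here; the key point is that the datastore directory is visible at the same path on the front-end and on every host.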
There are two configuration steps needed to perform a basic set up:
OpenNebula provides an easily adaptable and customizable network subsystem in order to better integrate with the specific network requirements of existing datacenters.
The network is needed by the OpenNebula front-end daemons to access the hosts in order to manage and monitor the hypervisors, and to move image files. It is highly recommended to install a dedicated network for this purpose.
To offer network connectivity to the VMs across the different hosts, the default configuration connects the virtual machine network interface to a bridge in the physical host. To make effective use of your VM deployments you'll probably need to make one or more physical networks accessible to them. Please check the Networking guide to find out the networking technologies supported by OpenNebula.
For example, a typical host with two physical networks, one for public IP addresses (attached to eth0 NIC) and the other for private virtual LANs (NIC eth1) should have two bridges:
<xterm>
$ brctl show
bridge name    bridge id           STP enabled    interfaces
vbr0           8000.001e682f02ac   no             eth0
vbr1           8000.001e682f02ad   no             eth1
</xterm>
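The bridges themselves can be created with brctl, as in the following sketch (run as root; persistent configuration is usually done through your distribution's network scripts rather than by hand):

```shell
# Create the two bridges and attach the physical NICs
brctl addbr vbr0
brctl addif vbr0 eth0

brctl addbr vbr1
brctl addif vbr1 eth1

# Bring the bridges up
ip link set vbr0 up
ip link set vbr1 up
```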
Proceed to the Installing the Software guide to install OpenNebula.