This Installation & Configuration Guide aims to show how to install and configure OpenNebula.
The ONE Server machine needs to have the following software installed:
These packages are only needed if you want to rebuild template parsers:
Most of this software is already packaged in Linux distributions. Here are the packages needed on Debian Lenny.
Provisioning hosts need to have the following installed:
And, depending on the chosen hypervisor:
Depending on the hypervisor you are planning to use, you may need to meet additional requirements. Please check the guides for each virtualizer:
It is necessary that the ONE Server and all the hosts:

* share the same user account (the <oneadmin> administrator). This can be done using NIS.
* trust each other in the ssh scope. This means that all the hosts have to trust the public key of the common user (the ONE administrator). Public keys need to be generated on the ONE server and then copied to all the cluster hosts so they trust the ONE server without prompting for a password.
Also, we recommend the use of ssh-agent to keep the private key encrypted on the ONE server. A good tutorial on how to do this easily can be found here.
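A minimal sketch of the key setup, assuming an example key path and example user/host names (adapt them to your site):

```shell
# Generate a passwordless RSA keypair for the oneadmin user (example path;
# run as oneadmin on the ONE server).
ssh-keygen -q -t rsa -N "" -f /tmp/oneadmin_id_rsa
# The public key must then be appended to ~/.ssh/authorized_keys on every
# cluster host, for example with:
#   ssh-copy-id -i /tmp/oneadmin_id_rsa.pub oneadmin@host1.mydomain.org
```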
The <oneadmin> user may need to execute commands with root privileges to interact with the hypervisor. Please check the driver configuration guides to complete the user configuration process:
The ONE server and the cluster hosts have to share directories using, for example, NFS. The filesystem layout for the cluster has to conform to the definitions below:
* $ONE_LOCATION: Path to the ONE installation.
* $ONE_LOCATION/var: Directory containing log files and directories for the different VMs. This directory needs to be shared: exported by the ONE server and mounted by all the cluster hosts. In the ONE server, it corresponds to $ONE_LOCATION/var. If this directory is mounted on the remote hosts at a point other than $ONE_LOCATION/var, you need to set the VM_RDIR variable in the ONE configuration file.
* $ONE_LOCATION/var/<VID>: Home directory for the VM with Identifier=<VID>. The system will create the following files in this location:
The ONE software needs to be installed on a machine that exports (at least) the $ONE_LOCATION folder using NFS. This is necessary for the checkpointing feature to work. There is a known issue regarding sqlite and NFS; please see the Release Notes for more info.
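As a sketch, the export on the ONE server could look like the following /etc/exports entry (the path and host pattern are examples; adapt them to your installation):

```
/srv/one  host*.mydomain.org(rw,sync,no_subtree_check)
```

The cluster hosts would then mount this export, ideally at the same path as on the ONE server so that VM_RDIR does not need to be changed.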
Follow these simple steps to install the ONE server:
scons OPTION=VALUE
The [OPTIONAL] OPTION=VALUE arguments are used to set non-default paths for:
OPTION | VALUE |
---|---|
sqlite | path-to-sqlite-install |
xmlrpc | path-to-xmlrpc-install |
parsers | yes if you want to rebuild flex/bison files |
./install.sh <destination_folder>
No known issues.
No known issues.
No known issues.
No known issues.
CentOS does not come with the needed versions of the following packages:
Here are the instructions on how to install them.
The version that comes with CentOS is 0.96, which is not compatible with our build scripts. To install version 0.98 you can download the RPM at:
http://www.scons.org/download.php
Tested with scons-0.98.5-1.noarch.rpm.
To install xmlrpc-c there is an apt repository with the needed packages. You can create a new file in /etc/apt/sources.list.d containing this line:
repomd http://centos.karan.org el5/extras/testing/i386/RPMS
After that you need to update the apt database and install these two packages:

$ apt-get update
$ apt-get install xmlrpc-c xmlrpc-c-devel
This package should be installed from source; you can download the tar.gz from http://www.sqlite.org/download.html. It was tested with sqlite 3.5.9.
If you do not install it to a system-wide location (/usr or /usr/local), you need to set LD_LIBRARY_PATH and tell scons where to find the files:
$ scons sqlite=<path where you installed sqlite>
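For example, assuming sqlite was installed under the example prefix /opt/sqlite:

```shell
# Let the runtime linker find the sqlite shared library:
export LD_LIBRARY_PATH=/opt/sqlite/lib:$LD_LIBRARY_PATH
# Then point scons at the same prefix:
#   scons sqlite=/opt/sqlite
```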
The configuration file is called oned.conf and it is placed inside the etc directory of $ONE_LOCATION, which in turn is the directory where OpenNebula is installed.
In this file the following aspects of oned can be defined:
* VM_RDIR: $ONE_LOCATION/var by default; this must be shared between the ONE server and the remote nodes. If the mount point of $ONE_LOCATION/var has a different path in the remote nodes than in the ONE server, set here the mount point of the _remote_ nodes.
* DEBUG: sets the verbosity of the $ONE_LOCATION/var/oned.log log file. Possible values are:

0 | ERROR |
---|---|
1 | WARNING |
2 | INFO |
3 | DEBUG |
An example of a complete oned.conf for a ONE server that is going to use the XEN hypervisor is shown below.
# Time in seconds between host monitorization
HOST_MONITORING_INTERVAL=10

# Time in seconds between virtual machine monitorization
VM_POLLING_INTERVAL=10

# Sets the verbosity of $ONE_LOCATION/var/oned.log
DEBUG=3

# Information manager configuration.
IM_MAD=[name="im_xen",executable="bin/one_im_ssh",arguments="etc/im_xen/im_xen.conf",default="etc/im_xen/im_xen.conf"]

# Virtual Machine Manager configuration.
VM_MAD=[name="vmm_xen",executable="bin/one_vmm_xen",default="etc/vmm_xen/vmm_xen.conf",type="xen"]

# Port where oned will listen for xmlrpc calls.
PORT=2633
Currently, ONE supports three different sets of drivers. In order to configure them, please take a look at the corresponding driver configuration guide:
Drivers are separate processes that communicate with the ONE core using an internal ASCII protocol. Before loading a driver, two run-commands (RC) files are sourced to optionally obtain environment variables and perform tasks described as shell scripts.
These two RC files are:
* $ONE_LOCATION/etc/mad/defaultrc: Global environment and tasks for all the drivers. Variables are defined in the following fashion: ATTRIBUTE=VALUE and, upon read, exported to the environment. These attributes are set for all the drivers, and can be superseded by the same attribute present in the driver's own specific RC file. Common attributes suitable to be set for all the drivers are:
# Debug for MADs.
# If set, MADs will generate cores and logs in $ONE_LOCATION/var.
# Possible values are [0=ERROR, 1=DEBUG]
ONE_MAD_DEBUG=

# Nice Priority to run the drivers
PRIORITY=19
The only out-of-the-box default value set in this file is PRIORITY, which, as seen above, is set to 19.
In order to use OpenNebula, you need to set the following environment variables:
ONE_LOCATION | pointing to <destination_folder> |
---|---|
ONE_XMLRPC | http://localhost:2633/RPC2 |
PATH | $ONE_LOCATION/bin:$PATH |
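A sketch of these settings for the <oneadmin> shell start-up files, using /srv/one as an example <destination_folder>:

```shell
# Example values; replace /srv/one with your actual <destination_folder>.
export ONE_LOCATION=/srv/one
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=$ONE_LOCATION/bin:$PATH
```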
The Scheduler module is in charge of the assignment between pending Virtual Machines and known Hosts. The ONE architecture defines this module as a separate process that can be started independently of oned. The ONE scheduling framework is designed in a generic way, so it is highly modifiable. ONE comes with a match making scheduler (mm_sched) that implements the Rank Scheduling Policy.
You can start oned without the scheduling process to operate it in a VM management mode. Starting or migrating VMs in this case is performed explicitly with the onevm command.
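For instance, a pending VM can be placed on a given host by hand; a sketch with example identifiers (VM 0, host 0 — see the Command Line Interface for the exact syntax):

```
$> onevm deploy 0 0
```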
The Haizea lease manager can also be used as a scheduling module in OpenNebula. Haizea allows OpenNebula to support advance reservation of resources and queuing of best effort requests (more generally, it allows you to lease your resources as VMs, with a variety of lease terms). The Haizea documentation includes a guide on how to use OpenNebula and Haizea to manage VMs on a cluster.
The goal of this policy is to prioritize those resources more suitable for the VM. First, those hosts that do not meet the VM requirements (see the ''REQUIREMENTS'' attribute) or do not have enough resources (available CPU and memory) to run the VM are filtered out. The ''RANK'' expression is evaluated on this list to sort it. Those resources with a higher rank are used first to allocate VMs.
Rank and requirement expressions are built using any of the attributes provided by the IM (e.g. ARCH, FREECPU…). Check the IM Driver Configuration Guide to extend the ONE information model.
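As a sketch, a VM template could combine these attributes like so (the values are illustrative; FREECPU is one of the IM-provided attributes mentioned above):

```
REQUIREMENTS = "FREECPU > 0.5"
RANK         = "FREECPU"
```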
Note that there is a difference between
Once the environment is correctly set up, we have to let ONE know about which resources it can use. In other words, we have to set up the cluster.
But first things first: we need to start the ONE daemon and the scheduler. You can start them both by issuing the following command as the <oneadmin> user:
$> one start
Now we should have two processes running:

* oned: Core process; attends the CLI requests, manages the pools and all the components.
* mm_sched: Scheduler process, in charge of the VM-HOST matching.

If those processes are running, you should see content in their log files:
$ONE_LOCATION/var/oned.log
$ONE_LOCATION/var/sched.log
Once we have made sure that both processes are running, let's set up the cluster in ONE. The first thing is adding hosts to ONE. This can be done by means of the onehost command (see the User Guide for more information). So let's add one host:
$> onehost add host1.mydomain.org im_xen vmm_xen
We are giving ONE hints about what it needs in order to run VMs on our cluster hosts. More details can be found in the Command Line Interface guide.
Once the ONE software is installed, the following directory tree should be found under $ONE_LOCATION:
To verify the installation, we recommend following the steps in the QuickStart guide, from this step onwards. Before tackling it, please make sure that your environment is correctly set.
There are different log files corresponding to different ONE components:
* $ONE_LOCATION/var/oned.log: All problems related to DB access, component communication, command line invocations and so on will be stated here. This file also stores all the information that is not specific to the Scheduler or the VM life-cycle. Its verbosity is regulated by DEBUG_LEVEL in $ONE_LOCATION/etc/oned.conf.
* $ONE_LOCATION/var/<VID>/vm.log: Information related to a given VM will be dumped into this file.
* $ONE_LOCATION/var/name-of-the-driver-executable.log: Log files generated by each of the drivers.