Installation & Configuration Guide for ONE 1.0

This Installation & Configuration Guide aims to show how to install and configure OpenNebula.

Requirements

Frontend

The ONE server machine needs the following software installed:

  • ruby >= 1.8.5
  • sqlite3 >= 3.5.2
  • sqlite3-dev >= 3.5.6-3
  • sqlite3-ruby
  • libxmlrpc-c >= 1.06
  • scons >= 0.97
  • g++ >= 4
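Before building, it can help to confirm the required tools are at least present on the frontend. The following is a minimal pre-flight sketch (it only checks presence in PATH, not versions, so verify the version numbers above manually):

```shell
# Report which of the required build tools are visible in PATH.
check_tools() {
  for cmd in ruby sqlite3 scons g++; do
    if command -v "$cmd" >/dev/null 2>&1; then
      echo "$cmd: found"
    else
      echo "$cmd: MISSING"
    fi
  done
}
check_tools
```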

These packages are only needed if you want to rebuild template parsers:

  • flex >= 2.5
  • bison >= 2.3

Most of this software is already packaged in Linux distributions. These are the packages needed on Debian Lenny:

  • ruby: ruby
  • sqlite3: libsqlite3-0, sqlite3
  • sqlite3-dev : libsqlite3-dev
  • sqlite3-ruby: libsqlite3-ruby
  • libxmlrpc-c: libxmlrpc-c3-dev, libxmlrpc-c3
  • scons: scons
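The package list above can be installed in one go with apt-get (run as root on Debian Lenny). The snippet below is guarded so it only attempts the installation when apt-get is available and you are root:

```shell
# Debian Lenny package names from the list above.
PKGS="ruby libsqlite3-0 sqlite3 libsqlite3-dev libsqlite3-ruby \
libxmlrpc-c3 libxmlrpc-c3-dev scons"
if command -v apt-get >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
  apt-get install $PKGS
else
  echo "apt-get not available or not root; would install: $PKGS"
fi
```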

Hosts

Provisioning hosts need the following software installed:

  • ruby >= 1.8.5
  • sudo >= 1.6.9

And, depending on the chosen hypervisor:

Virtualization Technology

Depending on the hypervisor you are planning to use, you may need to meet additional requirements. Please check the guides for each virtualizer:

Users Configuration

It is necessary that the ONE Server and all the hosts:

  • have a common user (the ONE admin user <oneadmin>). This can be done using NIS.
  • trust each other in the ssh scope. This means that all the hosts have to trust the public key of the common user (the ONE administrator). The public key needs to be generated on the ONE server and then copied to all the cluster hosts, so they trust the ONE server without prompting for a password.

    Also, we recommend using ssh-agent to keep the private key encrypted on the ONE server. A good tutorial on how to do this easily can be found here.
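The key setup described above can be sketched as follows, run as the <oneadmin> user on the ONE server. The key is generated into a temporary directory here purely for illustration (in practice you would use ~/.ssh/id_rsa), and -N "" creates a key without a passphrase to keep the sketch non-interactive; the guide's recommendation is an encrypted key kept loaded in ssh-agent:

```shell
# Generate an RSA key pair (temporary location for illustration only).
KEYDIR=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$KEYDIR/id_rsa"
# Then append id_rsa.pub to ~/.ssh/authorized_keys on every cluster host,
# for example (hostname is a placeholder):
#   cat "$KEYDIR/id_rsa.pub" | ssh oneadmin@host1 'cat >> ~/.ssh/authorized_keys'
```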

Cluster Filesystem Layout

The ONE server and the cluster hosts have to share directories using, for example, NFS. The filesystem layout for the cluster has to conform to the definitions below:

  • $ONE_LOCATION : Path to the ONE installation.
  • $ONE_LOCATION/var : Directory containing log files and directories for the different VMs. This directory needs to be shared: exported by the ONE server and mounted by all the cluster hosts. On the ONE server, it corresponds to $ONE_LOCATION/var. If this directory is mounted on the remote hosts at a different point than $ONE_LOCATION/var, you need to set the VM_RDIR variable in the ONE configuration file.
  • $ONE_LOCATION/var/<VID> : Home directory for the VM with Identifier=<VID>. The system will create the following files in this location:
    • Log files : Log file of the VM. All the information specific to the VM will be dumped into a file in this directory called vm.log.
    • Deployment description files : Stored in deployment.$EXECUTION, where $EXECUTION is the sequence number in the execution history of the VM (deployment.0 for the first host, deployment.1 for the second and so on).
    • Restore files : checkpointing information is also stored in this directory to restore the VM in case of failure. The state information is stored in a file called checkpoint.
  • Also, the VM images need to be accessible from all the cluster hosts, and the local root account of all the cluster hosts must have read-write permissions on them. When you specify the location of these images in the VM template, take into account that the paths must refer to where this directory is mounted on the remote hosts.
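As a concrete sketch of the sharing requirement, the var directory could be exported from the ONE server and mounted on each host like this (paths, hostnames, and mount options below are only examples, not values mandated by ONE):

```
# /etc/exports on the ONE server
/srv/one/var    host1.mydomain.org(rw,sync,no_subtree_check) host2.mydomain.org(rw,sync,no_subtree_check)

# /etc/fstab entry on each cluster host (oneserver is a placeholder name)
oneserver:/srv/one/var  /srv/one/var  nfs  rw,hard,intr  0  0
```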

Installation

The ONE software needs to be installed on a machine that exports (at least) the $ONE_LOCATION folder using NFS. This is necessary for the checkpointing feature to work. There is a known issue regarding sqlite and NFS; please see the Release Notes for more info.

Follow these simple steps to install the ONE server:

  • Download and untar the ONE tarball.
  • Change to the created folder and run scons to compile OpenNebula:
scons OPTION=VALUE

The OPTION=VALUE arguments are optional and are used to set non-default paths:

  • sqlite : path-to-sqlite-install
  • xmlrpc : path-to-xmlrpc-install
  • parsers : yes, if you want to rebuild the flex/bison files
  • Run the install script:
./install.sh <destination_folder>

Platform notes

Ubuntu/Kubuntu 8.04 and 7.10

No known issues.

Debian Lenny

No known issues.

Mac OS X 10.4 and 10.5

No known issues.

Fedora 8

No known issues.

CentOS 5

CentOS does not come with the needed versions of the following packages:

  • scons
  • xmlrpc-c
  • sqlite

Here are the instructions on how to install them.

scons

The version that comes with CentOS is 0.96, and it is not compatible with our build scripts. To install version 0.98, you can download the RPM from:

http://www.scons.org/download.php

Tested with scons-0.98.5-1.noarch.rpm.

xmlrpc-c

To install xmlrpc-c, there is an apt repository with the needed packages. You can create a new file in /etc/apt/sources.list.d containing this line:

repomd http://centos.karan.org el5/extras/testing/i386/RPMS

After that, you need to update the apt database and install these two packages:

$ apt-get install xmlrpc-c xmlrpc-c-devel

sqlite

This package should be installed from source; you can download the tar.gz from http://www.sqlite.org/download.html. It was tested with sqlite 3.5.9.

If you do not install it to a system-wide location (/usr or /usr/local), you need to add the library directory to LD_LIBRARY_PATH and tell scons where to find the files:

$ scons sqlite=<path where you installed sqlite>
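The steps above can be sketched end to end; $HOME/sqlite is a hypothetical install prefix used only for illustration:

```shell
# Build from source into a non-system prefix (done once, shown as comments):
#   tar xzf sqlite-3.5.9.tar.gz && cd sqlite-3.5.9
#   ./configure --prefix="$HOME/sqlite" && make && make install
SQLITE_PREFIX="$HOME/sqlite"
# Make the runtime linker find the shared library:
export LD_LIBRARY_PATH="$SQLITE_PREFIX/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
# And point scons at it when building ONE:
#   scons sqlite="$SQLITE_PREFIX"
```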

ONE Configuration

The configuration file is called oned.conf and is placed inside the etc directory under $ONE_LOCATION, the directory where OpenNebula is installed.

In this file, the following aspects of oned can be defined:

  • HOST_MONITORING_INTERVAL : Time in seconds between host monitoring runs
  • VM_POLLING_INTERVAL : Time in seconds between virtual machine monitoring runs
  • VM_RDIR : The remote nodes must have access to $ONE_LOCATION/var, so this directory must be shared between the ONE server and the remote nodes. If the mount point of $ONE_LOCATION/var has a different path on the remote nodes than on the ONE server, set this variable to the mount point on the remote nodes.
  • PORT : Port where oned will listen for xmlrpc calls
  • DEBUG_LEVEL : Sets the level of verbosity of the $ONE_LOCATION/var/oned.log log file. Possible values are:
    • 0 : ERROR
    • 1 : WARNING
    • 2 : INFO
    • 3 : DEBUG
  • IM_MAD : Information manager configuration. You can define more than one information manager, but make sure each has a different name. To define one, the following needs to be set:
    • name: name for this information manager.
    • executable: path of the information manager executable, can be an absolute path or a relative path from $ONE_LOCATION
    • arguments: path where the information manager configuration resides, can also be a relative path
    • default: default values and configuration parameters for the driver
  • VM_MAD : Virtual Machine Manager configuration.
    • name: name of the virtual machine manager.
    • executable: path of the virtual machine manager executable, can be an absolute path or a relative path from $ONE_LOCATION
    • type: driver type, supported drivers: xen, kvm or ec2
    • default: file with default values for the driver (for example to set the default Kernel).

An example of a complete oned.conf for a ONE installation that is going to use the Xen hypervisor is shown below.

# Time in seconds between host monitorization
HOST_MONITORING_INTERVAL=10

# Time in seconds between virtual machine monitorization
VM_POLLING_INTERVAL=10

# Sets the verbosity of $ONE_LOCATION/var/oned.log 
DEBUG_LEVEL=3

# Information manager configuration. 
IM_MAD=[name="im_xen",executable="bin/one_im_ssh",arguments="etc/im_xen/im_xen.conf",default="etc/im_xen/im_xen.conf"]

# Virtual Machine Manager configuration.
VM_MAD=[name="vmm_xen",executable="bin/one_vmm_xen",default="etc/vmm_xen/vmm_xen.conf",type="xen"]

# Port where oned will listen for xmlrpc calls.
PORT=2633

Drivers Configuration

Currently, ONE supports three different sets of drivers. In order to configure them, please take a look at the corresponding driver configuration guide:

Drivers are separate processes that communicate with the ONE core using an internal ASCII protocol. Before a driver is loaded, two run-commands (RC) files are sourced to optionally set environment variables and perform tasks written as shell script.

These two RC files are:

  • $ONE_LOCATION/etc/mad/defaultrc. Global environment and tasks for all the drivers. Variables are defined in the following fashion:
  ATTRIBUTE=VALUE

and, upon being read, exported to the environment. These attributes are set for all the drivers and can be superseded by the same attribute in a driver's own specific RC file. Common attributes suitable to be set for all the drivers are:

# Debug for MADs. 
# If set, MADs will generate cores and logs in $ONE_LOCATION/var. 
# Possible values are [0=ERROR, 1=DEBUG]
ONE_MAD_DEBUG=

# Nice Priority to run the drivers
PRIORITY=19

The only out-of-the-box default value set in this file is PRIORITY, which, as seen above, is set to 19.

  • Specific file for each driver. Please see each driver's configuration guide for file location and specific options.

Environment Configuration

In order to use OpenNebula, you need to set the following environment variables:

  • ONE_LOCATION : pointing to <destination_folder>
  • ONE_XMLRPC : http://localhost:2633/RPC2
  • PATH : $ONE_LOCATION/bin:$PATH
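These would typically be added to the <oneadmin> user's shell profile. In the sketch below, /srv/one stands in for whatever <destination_folder> you passed to install.sh:

```shell
# Environment for OpenNebula (example install location).
export ONE_LOCATION=/srv/one
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=$ONE_LOCATION/bin:$PATH
```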

Scheduler Module

The Scheduler module is in charge of the assignment between pending Virtual Machines and known Hosts. The ONE architecture defines this module as a separate process that can be started independently of oned. The ONE scheduling framework is designed in a generic way, so it is highly modifiable. ONE comes with a match making scheduler (mm_sched) that implements the Rank Scheduling Policy.

:!: You can start oned without the scheduling process to operate it in a VM management mode. Starting or migrating VMs in this case is performed explicitly using the onevm command.

:!: The Haizea lease manager can also be used as a scheduling module in OpenNebula. Haizea allows OpenNebula to support advance reservation of resources and queuing of best-effort requests (more generally, it allows you to lease your resources as VMs, with a variety of lease terms). The Haizea documentation includes a guide on how to use OpenNebula and Haizea to manage VMs on a cluster.

Rank Scheduling Policy

The goal of this policy is to prioritize the resources that are more suitable for the VM. First, hosts that do not meet the VM requirements (see the ''REQUIREMENTS'' attribute) or do not have enough resources (available CPU and memory) to run the VM are filtered out. The ''RANK'' expression is then evaluated over this list to sort it. The resources with higher rank are used first to allocate VMs.

:!: Rank and requirement expressions are built using any of the attributes provided by the IM (e.g. ARCH, FREECPU…). Check the IM Driver Configuration Guide to extend the ONE information model.

:!: Note that there is a difference between:

  • Free CPU : The free physical CPU as reported by the monitoring process executed by the Information Manager.
  • Available CPU : A more logical concept: the total physical CPU minus the CPU reserved for the running VMs. The reserved CPU may differ significantly from the CPU the VMs actually consume.
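A toy calculation makes the distinction concrete. The numbers below are invented for illustration: a 2-CPU host reports a total of 200, two running VMs each reserve 50% of a CPU, but each is currently consuming only 10%:

```shell
TOTALCPU=200
RESERVEDCPU=100     # 2 VMs x 50 (their CPU reservations)
USEDCPU=20          # 2 VMs x 10 (what monitoring actually measures)
FREECPU=$((TOTALCPU - USEDCPU))       # what the IM reports
AVAILCPU=$((TOTALCPU - RESERVEDCPU))  # what the scheduler checks against
echo "FREECPU=$FREECPU AVAILCPU=$AVAILCPU"
```

Here free CPU is 180 while available CPU is only 100: even though the host looks mostly idle, the scheduler will only place VMs against the unreserved 100.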

Setting up the cluster

Once the environment is correctly set up, we have to let ONE know about which resources it can use. In other words, we have to set up the cluster.

But first things first: we need to start the ONE daemon and the scheduler. You can start both by issuing the following command as the <oneadmin> user:

$> one start

Now we should have two processes running:

  • oned : Core process; it handles the CLI requests and manages the pools and all the components
  • mm_sched : Scheduler process, in charge of the VM-to-host matching

If those processes are running, you should see content in their log files:

  • $ONE_LOCATION/var/oned.log
  • $ONE_LOCATION/var/sched.log
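One way to confirm both daemons came up is to check that their log files exist and are non-empty. A small sketch, assuming $ONE_LOCATION is set as described in the Environment Configuration section:

```shell
# Report whether each daemon has written to its log after `one start`.
check_logs() {
  for log in oned.log sched.log; do
    if [ -s "$ONE_LOCATION/var/$log" ]; then
      echo "$log: ok"
    else
      echo "$log: empty or missing"
    fi
  done
}
check_logs
```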

Once we have made sure that both processes are running, let's set up the cluster in ONE. The first thing is adding hosts to ONE, which can be done by means of the onehost command (see the User Guide for more information). So let's add one host:

$> onehost add host1.mydomain.org im_xen vmm_xen

We are giving ONE hints about what it needs in order to run VMs on our cluster hosts. More details can be found in the Command Line Interface guide.

Verifying ONE Installation

Once the ONE software is installed, the following tree should be found under $ONE_LOCATION:

To verify the installation, we recommend following the steps in the QuickStart guide, from this step onwards. Before tackling it, please make sure that your environment is correctly set up.

Logging

There are different log files corresponding to different ONE components:

  • ONE Daemon: The core component of ONE dumps all its logging information into $ONE_LOCATION/var/oned.log. All problems related to DB access, component communication, command-line invocations and so on will be logged here. This file also stores all the information that is not specific to the Scheduler or the VM life-cycle. Its verbosity is regulated by DEBUG_LEVEL in $ONE_LOCATION/etc/oned.conf.
  • Scheduler: All the scheduler information is collected in the $ONE_LOCATION/var/sched.log file. This consists of error messages as well as information about the scheduling process.
  • Virtual Machines: Each VM controlled by ONE has its own directory, named after its ID (<VID>). The log file for a VM is $ONE_LOCATION/var/<VID>/vm.log; information related to that VM is dumped into this file.
  • Drivers: Each driver can have its ONE_MAD_DEBUG variable activated in its RC file (see the Drivers Configuration section for more details). If so, log information will be dumped to $ONE_LOCATION/var/name-of-the-driver-executable.log.