Configuration Guide 1.4

OpenNebula Components

OpenNebula comprises three types of processes:

  • The OpenNebula daemon (oned), which orchestrates the operation of all the modules and controls the VMs' life-cycle
  • The drivers, which access specific cluster systems (e.g. storage or hypervisors)
  • The scheduler, which makes VM placement decisions

Figure: high-level architecture of a cluster, its components and their relationships

In this section you'll learn how to configure and start these services.

OpenNebula Daemon

The configuration file for the daemon is called oned.conf and it is placed inside the $ONE_LOCATION/etc directory (or in /etc/one if OpenNebula was installed system wide).

A detailed description of all the configuration options for the OpenNebula daemon can be found in the oned.conf reference document.

The oned.conf file consists of the following sections:

  • General configuration attributes, such as the interval between cluster node and VM monitoring actions or the MAC prefix to be used. See more details...
  • Information Drivers, the specific adaptors that will be used to monitor cluster nodes. See more details...
  • Virtualization Drivers, the adaptors that will be used to interface with the hypervisors. See more details...
  • Transfer Drivers, which are used to interface with the storage system to clone, delete or move VM images. See more details...
  • Hooks, which are executed on specific events, e.g. VM creation. See more details...

The following example configures OpenNebula to work with KVM and a shared file system:

# Attributes
HOST_MONITORING_INTERVAL = 60
VM_POLLING_INTERVAL      = 60

VM_DIR = /srv/cloud/one/var #Path in the cluster nodes to store VM images 

NETWORK_SIZE = 254     #default
MAC_PREFIX   = "00:03"

#Drivers
IM_MAD = [ name="im_kvm",  executable="one_im_ssh",  arguments="im_kvm/im_kvm.conf" ]
VM_MAD = [ name="vmm_kvm", executable="one_vmm_kvm", default="vmm_kvm/vmm_kvm.conf", type="kvm" ]
TM_MAD = [ name="tm_nfs",  executable="one_tm",      arguments="tm_nfs/tm_nfs.conf" ]

:!: Be sure that VM_DIR is set to the path where the front-end's $ONE_LOCATION/var directory is mounted on the cluster nodes.
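
A quick sanity check (a sketch, assuming VM_DIR is /srv/cloud/one/var as in the example above) is to verify on each cluster node that the shared directory is actually mounted: <xterm>
$ df -h /srv/cloud/one/var
</xterm>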

Scheduler

The Scheduler module is in charge of the assignment between pending Virtual Machines and cluster nodes. OpenNebula's architecture defines this module as a separate process that can be started independently of oned. OpenNebula comes with a match making scheduler (mm_sched) that implements the Rank Scheduling Policy.

The goal of this policy is to prioritize the resources that are most suitable for the VM. You can configure several resource- and load-aware policies simply by specifying RANK expressions in the Virtual Machine definition files. Check the scheduling guide to learn how to configure these policies.
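
For example, a VM template could include a RANK expression built from the host attributes reported by the information driver (a minimal sketch; FREECPU is one of the monitored attributes shown later in this guide, and the exact syntax is covered in the scheduling guide):

# Prefer the host with the most free CPU
RANK = "FREECPU"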

You can use OpenNebula without the scheduling process to operate it in a VM management mode. In this case, starting or migrating VMs is performed explicitly with the onevm command.
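
For instance, you would pick the target host yourself (the VM and host IDs below are illustrative): <xterm>
$ onevm deploy 0 1
$ onevm migrate 0 2
</xterm>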

The Haizea lease manager can also be used as a scheduling module in OpenNebula. Haizea allows OpenNebula to support advance reservation of resources and queuing of best effort requests.

Drivers

Drivers are separate processes that communicate with the OpenNebula core using an internal ASCII protocol. Before a driver is loaded, two run-command (RC) files are sourced to optionally set environment variables.

These two RC files are:

  • $ONE_LOCATION/etc/mad/defaultrc. Global environment and tasks for all the drivers. Variables are defined using sh syntax and, when read, exported to the driver's environment:
# Debug for MADs [0=ERROR, 1=DEBUG]
# If set, MADs will generate cores and logs in $ONE_LOCATION/var.
ONE_MAD_DEBUG=

# Nice Priority to run the drivers
PRIORITY=19

  • The specific rc file of each driver, which may redefine the defaultrc variables. Please see each driver's configuration guide for its specific options.

Start & Stop OpenNebula

:!: When you execute OpenNebula for the first time, it will create an administration account. Be sure to put the user and password on a single line, as user:password, in the $ONE_AUTH file.
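
For example, for the default location used when ONE_AUTH is not set (the password below is just an illustration): <xterm>
$ mkdir -p ~/.one
$ echo "oneadmin:onepass" > ~/.one/one_auth
</xterm>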

The OpenNebula daemon and the scheduler can be easily started with the $ONE_LOCATION/bin/one script. Just execute as the <oneadmin> user: <xterm> $ one start </xterm> If you do not want to start the scheduler, just use oned; check oned -h for options.
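
The same script can also be used to stop both daemons when you are done: <xterm> $ one stop </xterm>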

Two processes should now be running:

  • oned : Core process, which serves CLI requests and manages the pools and all the components
  • mm_sched : Scheduler process, in charge of matching VMs to cluster nodes

If these processes are running, you should see content in their log files (log files are placed in /var/log/one/ if you installed OpenNebula system wide; see the example below):

  • $ONE_LOCATION/var/oned.log
  • $ONE_LOCATION/var/sched.log
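
For instance, in a self-contained installation you can follow both logs with: <xterm>
$ tail -f $ONE_LOCATION/var/oned.log $ONE_LOCATION/var/sched.log
</xterm>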

OpenNebula Users

There are two account types in the OpenNebula system:

  • The oneadmin account is created the first time OpenNebula is started, using the ONE_AUTH data (see below). oneadmin has enough privileges to perform any operation on any object (virtual machine, network, host or user)
  • Regular user accounts must be created by <oneadmin> and they can only manage their own objects (virtual machines and networks)

:!: Virtual Networks created by oneadmin are public and can be used by every other user.

OpenNebula users should have the following environment variables set (a sample shell profile follows the list):

  • ONE_AUTH: Needs to point to a file containing just a single line stating "username:password". If ONE_AUTH is not defined, $HOME/.one/one_auth will be used instead. If no auth file is present, OpenNebula cannot work properly, as it is needed by the core, the CLI and the cloud components alike.
  • ONE_LOCATION: If OpenNebula was installed in self-contained mode, this variable must be set to <destination_folder>. Otherwise, in system wide mode, this variable must be unset. More info on installation modes can be found here.
  • ONE_XMLRPC: URL of the oned XML-RPC interface, e.g. http://localhost:2633/RPC2.
  • PATH: $ONE_LOCATION/bin:$PATH if self-contained. Otherwise this is not needed.
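
As an example, the shell profile of a user in a self-contained installation under /srv/cloud/one (the path is taken from the earlier oned.conf example; adjust it to your setup) could contain:

export ONE_LOCATION=/srv/cloud/one
export ONE_AUTH=$HOME/.one/one_auth
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=$ONE_LOCATION/bin:$PATH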

Adding and Deleting Users

User accounts within the OpenNebula system are managed by <oneadmin> with the oneuser utility. Users can be easily added to the system like this: <xterm> $ oneuser create helen mypass </xterm> In this case user helen should include the following content in the $ONE_AUTH file: <xterm>
$ export ONE_AUTH="/home/helen/.one/one_auth"
$ cat $ONE_AUTH
helen:mypass
</xterm>

Users can be deleted simply with: <xterm> $ oneuser delete john </xterm>

To list the users in the system just issue the command: <xterm>
$ oneuser list
 UID NAME            PASSWORD                                   ENABLE
   0 oneadmin        c24783ba96a35464632a624d9f829136edc0175e   True
   1 paul            e727d1464ae12436e899a726da5b2f11d8381b26   True
   2 helen           34a91f713808846ade4a71577dc7963631ebae14   True
</xterm>

Detailed information about the oneuser utility can be found in the Command Line Reference.

OpenNebula Hosts

Finally, to set up the cluster, the nodes have to be added to the system as OpenNebula hosts. You need the following information:

  • Hostname or IP address of the cluster node
  • Information Driver to be used to monitor the host, e.g. im_kvm.
  • Storage Driver to clone, delete, move or copy images into the host, e.g. tm_nfs.
  • Virtualization Driver to boot, stop, resume or migrate VMs in the host, e.g. vmm_kvm.

Before adding a host, check that you can ssh to it without being prompted for a password.
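
A simple way to verify this from the front-end (host01 is just an example node name): <xterm>
$ ssh host01 hostname
host01
</xterm>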

Adding and Deleting Hosts

Hosts can be added to the system at any time with the onehost utility. You can add the cluster nodes to be used by OpenNebula like this: <xterm>
$ onehost create host01 im_kvm vmm_kvm tm_nfs
$ onehost create host02 im_kvm vmm_kvm tm_nfs
</xterm>

The status of the cluster can be checked with the list command: <xterm>
$ onehost list
 HID NAME                      RVM   TCPU   FCPU   ACPU    TMEM    FMEM STAT
   0 host01                      2    100     90     90  523264  205824   on
   1 host02                      7    100     99     99  523264  301056   on
   2 host03                      0    100     99     99  523264  264192  off
</xterm>

And specific information about a host with show: <xterm>
$ onehost show host01
HOST 0 INFORMATION
ID      : 0
NAME    : host01
STATE   : MONITORED
IM_MAD  : im_kvm
VM_MAD  : vmm_kvm
TM_MAD  : tm_nfs

HOST SHARES
MAX MEM              : 523264
USED MEM (REAL)      : 317440
USED MEM (ALLOCATED) : 131072
MAX CPU              : 100
USED CPU (REAL)      : 10
USED CPU (ALLOCATED) : 20
RUNNING VMS          : 2

MONITORING INFORMATION
ARCH=i686
CPUSPEED=1995
FREECPU=90
FREEMEMORY=205824
HOSTNAME=host01
HYPERVISOR=xen
MODELNAME=Intel(R) Xeon(R) CPU L5335 @ 2.00GHz
NETRX=0
NETTX=0
TOTALCPU=100
TOTALMEMORY=523264
USEDCPU=10
USEDMEMORY=317440
</xterm>

If you do not want to use a given host, you can temporarily disable it: <xterm> $ onehost disable host01 </xterm> A disabled host will be listed with STAT off by onehost list. You can also remove a host permanently with: <xterm> $ onehost delete host01 </xterm>

Detailed information about the onehost utility can be found in the Command Line Reference.

Logging and Debugging

There are different log files corresponding to different OpenNebula components:

  • ONE Daemon: The core component of OpenNebula dumps all its logging information into $ONE_LOCATION/var/oned.log. Its verbosity is regulated by DEBUG_LEVEL in $ONE_LOCATION/etc/oned.conf (see the snippet after this list).
  • Scheduler: All the scheduler information is collected into the $ONE_LOCATION/var/sched.log file.
  • Virtual Machines: All VMs controlled by OpenNebula have their own folder, $ONE_LOCATION/var/<VID>/ (or /var/lib/one/<VID> in a system wide installation). You can find the following information in it:
    • Log file : The information specific to the VM is dumped to a file in this directory called vm.log. Note: these files are in /var/log/one if OpenNebula was installed system wide.
    • Deployment description files : Stored in deployment.<EXECUTION>, where <EXECUTION> is the sequence number in the execution history of the VM (deployment.0 for the first host, deployment.1 for the second and so on).
    • Transfer description files : Stored in transfer.<EXECUTION>.<OPERATION>, where <EXECUTION> is the sequence number in the execution history of the VM, <OPERATION> is the stage where the script was used, e.g. transfer.0.prolog, transfer.0.epilog, or transfer.1.cleanup.
    • Save images: Stored in the images/ sub-directory; images are named disk.<id>.
    • Restore files : check-pointing information is also stored in this directory to restore the VM in case of failure. The state information is stored in a file called checkpoint.
  • Drivers: Each driver can have its ONE_MAD_DEBUG variable activated in its RC file (see the Drivers configuration section for more details). If so, error information will be dumped to $ONE_LOCATION/var/name-of-the-driver-executable.log; the log information of the drivers is in oned.log.
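
As a reference, the two debug knobs mentioned above can be raised like this (a sketch; check the oned.conf reference and the driver guides for the exact values):

# $ONE_LOCATION/etc/oned.conf - core verbosity
DEBUG_LEVEL = 3

# $ONE_LOCATION/etc/mad/defaultrc - driver debugging [0=ERROR, 1=DEBUG]
ONE_MAD_DEBUG=1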