Basic Configuration 2.2

OpenNebula Components

OpenNebula comprises the execution of three types of processes:

  • The OpenNebula daemon (oned), to orchestrate the operation of all the modules and control the VM life-cycle
  • The drivers to access specific cluster systems (e.g. storage or hypervisors)
  • The scheduler to make VM placement decisions

[Figure: high-level architecture of the cluster, its components and their relationships]

In this section you'll learn how to configure and start these services.

OpenNebula Daemon

The configuration file for the daemon is called oned.conf and it is placed inside the $ONE_LOCATION/etc directory (or in /etc/one if OpenNebula was installed system wide).

A detailed description of all the configuration options for the OpenNebula daemon can be found in the oned.conf reference document.

The oned.conf file consists of the following sections:

  • General configuration attributes, such as the interval between cluster node and VM monitoring actions, or the MAC prefix to be used. See more details...
  • Information Drivers, the specific adaptors that will be used to monitor cluster nodes. See more details...
  • Virtualization Drivers, the adaptors that will be used to interface with the hypervisors. See more details...
  • Transfer Drivers, used to interface with the storage system to clone, delete or move VM images. See more details...
  • Image Repository, used to store images for virtual machines. See more details...
  • Hooks, which are executed on specific events, e.g. VM creation. See more details...

The following example will configure OpenNebula to work with KVM and a shared FS:

# Attributes
HOST_MONITORING_INTERVAL = 60
VM_POLLING_INTERVAL      = 60

# VM_DIR = /srv/cloud/one/var #Path in the cluster nodes to store VM images

SCRIPTS_REMOTE_DIR=/tmp/one
DB = [ backend = "sqlite" ]
VNC_BASE_PORT = 5000

NETWORK_SIZE = 254     #default
MAC_PREFIX   = "00:03"

DEFAULT_IMAGE_TYPE    = "OS"
DEFAULT_DEVICE_PREFIX = "hd"

#Drivers
IM_MAD = [name="im_kvm", executable="one_im_ssh", arguments="kvm"]
VM_MAD = [name="vmm_kvm", executable="one_vmm_sh", arguments="kvm",
          default="vmm_sh/vmm_sh_kvm.conf", type="kvm" ]
TM_MAD = [name="tm_nfs", executable="one_tm", arguments="tm_nfs/tm_nfs.conf" ]

:!: The VM_DIR variable is commented out by default. This variable is not needed if OpenNebula's VAR directory is mounted at the same path on the front-end and the cluster nodes.

If you need to mount this directory at a different location on the cluster nodes, make sure this variable is set to that location. Keep in mind that this path applies only to the cluster nodes; the front-end will keep using the VAR location.
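
For example, if the cluster nodes mount OpenNebula's VAR contents under /srv/one/var (an illustrative path, not a default), the variable would be uncommented and set as follows:

VM_DIR = /srv/one/var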

Scheduler

The Scheduler module is in charge of the assignment between pending Virtual Machines and cluster nodes. OpenNebula's architecture defines this module as a separate process that can be started independently of oned. OpenNebula comes with a match-making scheduler (mm_sched) that implements the Rank Scheduling Policy.

The goal of this policy is to prioritize the resources that are most suitable for the VM. You can configure several resource- and load-aware policies by simply specifying RANK expressions in the Virtual Machine definition files. Check the scheduling guide to learn how to configure the scheduler and make use of these policies.
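
As a quick illustration (a minimal sketch, assuming the FREECPU host monitoring attribute reported by the information drivers), a VM template could rank hosts by their free CPU so that the least loaded node is preferred:

RANK = FREECPU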

You can use OpenNebula without the scheduling process to operate it in a VM management mode. Starting or migrating VMs in this case is performed explicitly with the onevm command.
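
For instance, a pending VM could be manually placed on a specific host with the deploy sub-command (the VM and host IDs below are only illustrative): <xterm> $ onevm deploy 0 1 </xterm>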

The Haizea lease manager can also be used as a scheduling module in OpenNebula. Haizea allows OpenNebula to support advance reservation of resources and queuing of best effort requests.

Drivers

Drivers are separate processes that communicate with the OpenNebula core using an internal ASCII protocol. Before a driver is loaded, two run-commands (RC) files are sourced to optionally set environment variables.

These two RC files are:

  • $ONE_LOCATION/etc/defaultrc. Global environment and tasks for all the drivers. Variables are defined using sh syntax and, once read, exported to the driver's environment:
# Debug for MADs [0=ERROR, 1=DEBUG] 
# If set, MADs will generate cores and logs in $ONE_LOCATION/var. 
ONE_MAD_DEBUG=
# Nice Priority to run the drivers
PRIORITY=19

  • The specific RC file for each driver, which may redefine the defaultrc variables. Check each driver's configuration guide for the available options.

Start & Stop OpenNebula

:!: When you execute OpenNebula for the first time it will create an administration account. Be sure to put the user and password in a single line as user:password in the $ONE_AUTH file.
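
If the file does not exist yet, it can be prepared beforehand; the following is just an illustration, assuming the default $HOME/.one/one_auth location and a password of your choice:

<xterm>
$ mkdir -p ~/.one
$ echo "oneadmin:mypassword" > ~/.one/one_auth
</xterm>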

The OpenNebula daemon and the scheduler can be easily started with the $ONE_LOCATION/bin/one script. Just execute as the <oneadmin> user: <xterm> $ one start </xterm>

OpenNebula by default truncates older logs. If you want to back up OpenNebula's main log, supply the -b option: <xterm> $ one -b start </xterm>

If you do not want to start the scheduler, just use oned; check oned -h for options.

Now we should have two processes running:

  • oned : Core process; it handles CLI requests and manages the pools and all the components
  • mm_sched : Scheduler process, in charge of matching VMs to cluster nodes

If those processes are running, you should see content in their log files (log files are placed in /var/log/one/ if you installed OpenNebula system wide):

  • $ONE_LOCATION/var/oned.log
  • $ONE_LOCATION/var/sched.log
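
For instance, to follow the daemon activity while it runs (path shown for a self-contained installation): <xterm> $ tail -f $ONE_LOCATION/var/oned.log </xterm>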

OpenNebula Users

There are two account types in the OpenNebula system:

  • The oneadmin account is created the first time OpenNebula is started, using the ONE_AUTH data (see below). oneadmin has enough privileges to perform any operation on any object (virtual machine, network, host or user).
  • Regular user accounts must be created by <oneadmin>; they can only manage their own objects (images, virtual machines and networks), and use public objects from other users.

OpenNebula users should have the following environment variables set:

  • ONE_AUTH: Needs to point to a file containing just a single line stating "username:password". If ONE_AUTH is not defined, $HOME/.one/one_auth will be used instead. If no auth file is present, OpenNebula cannot work properly, as this is needed by the core, the CLI, and the cloud components as well.
  • ONE_LOCATION: If OpenNebula was installed in self-contained mode, this variable must be set to <destination_folder>. Otherwise, in system-wide mode, this variable must be unset. More info on installation modes can be found here.
  • ONE_XMLRPC: http://localhost:2633/RPC2
  • PATH: $ONE_LOCATION/bin:$PATH if self-contained. Otherwise this is not needed.
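
As a reference, a user of a self-contained installation could set these variables in the shell start-up file like this (the installation path below is only illustrative):

export ONE_LOCATION=/srv/cloud/one
export ONE_AUTH=$HOME/.one/one_auth
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=$ONE_LOCATION/bin:$PATH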

Adding and Deleting Users

User accounts within the OpenNebula system are managed by <oneadmin> with the oneuser utility. Users can be easily added to the system like this:

<xterm>
$ oneuser create helen mypass
</xterm>

In this case user helen should include the following content in the $ONE_AUTH file:

<xterm>
$ export ONE_AUTH="/home/helen/.one/one_auth"
$ cat $ONE_AUTH
helen:mypass
</xterm>

Users can be deleted by simply: <xterm> $ oneuser delete john </xterm>

To list the users in the system just issue the command:

<xterm>
$ oneuser list
  UID NAME            PASSWORD                                              ENABLE
    0 oneadmin        c24783ba96a35464632a624d9f829136edc0175e              True
    1 paul            e727d1464ae12436e899a726da5b2f11d8381b26              True
    2 helen           34a91f713808846ade4a71577dc7963631ebae14              True
</xterm>

Detailed information about the oneuser utility can be found in the Command Line Reference.

OpenNebula Hosts

Finally, the physical nodes have to be added to the system as OpenNebula hosts. Hosts can be added anytime with the onehost utility, like this:

<xterm>
$ onehost create host01 im_kvm vmm_kvm tm_nfs
$ onehost create host02 im_kvm vmm_kvm tm_nfs
</xterm>

Before adding a host, check that oneadmin can ssh to it without being prompted for a password.
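
A typical way to achieve this (the user and host names below are illustrative) is to generate an ssh key pair for oneadmin and copy the public key to each node, then verify that the login needs no password:

<xterm>
$ ssh-keygen -t rsa
$ ssh-copy-id oneadmin@host01
$ ssh host01
</xterm>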

Physical host monitoring is performed by probes, which are scripts designed to extract pieces of information from the host operating system. These scripts are copied to SCRIPTS_REMOTE_DIR (set in $ONE_LOCATION/etc/oned.conf) on the remote execution nodes when a host is added to the system (i.e. upon "onehost create"), and they will be copied again if any probe is removed or added in the front-end ($ONE_LOCATION/lib/remotes). If any script is modified, or the administrator wants to force the probes to be copied again to a particular host for any reason, the "onehost sync" functionality can be used, as shown below.
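
For example, after editing a probe on the front-end, the hosts can be refreshed with: <xterm> $ onehost sync </xterm>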

You can find a complete explanation in the guide for managing physical hosts and clusters, and the Command Line Interface reference.

Logging and Debugging

There are different log files corresponding to different OpenNebula components:

  • ONE Daemon: The core component of OpenNebula dumps all its logging information onto $ONE_LOCATION/var/oned.log. Its verbosity is regulated by DEBUG_LEVEL in $ONE_LOCATION/etc/oned.conf.
  • Scheduler: All the scheduler information is collected into the $ONE_LOCATION/var/sched.log file.
  • Virtual Machines: All VMs controlled by OpenNebula have their own folder, $ONE_LOCATION/var/<VID>/ (or /var/lib/one/<VID> in a system-wide installation). You can find the following information in it:
    • Log file : The information specific to the VM will be dumped to a file in this directory called vm.log. Note: these files are in /var/log/one if OpenNebula was installed system wide.
    • Deployment description files : Stored in deployment.<EXECUTION>, where <EXECUTION> is the sequence number in the execution history of the VM (deployment.0 for the first host, deployment.1 for the second and so on).
    • Transfer description files : Stored in transfer.<EXECUTION>.<OPERATION>, where <EXECUTION> is the sequence number in the execution history of the VM and <OPERATION> is the stage where the script was used, e.g. transfer.0.prolog, transfer.0.epilog, or transfer.1.cleanup.
    • Save images : Stored in the images/ sub-directory; images are named disk.<id>.
    • Restore files : Check-pointing information is also stored in this directory to restore the VM in case of failure. The state information is stored in a file called checkpoint.
  • Drivers: Each driver can activate its ONE_MAD_DEBUG variable in its RC file (see the Drivers configuration section for more details). If so, error information will be dumped to $ONE_LOCATION/var/name-of-the-driver-executable.log; log information of the drivers is in oned.log.