Xen Driver Guide 1.2

The Xen hypervisor offers a powerful, efficient and secure feature set for the virtualization of x86, IA64, PowerPC and other CPU architectures. It delivers both paravirtualization and full virtualization. This guide describes the use of Xen with OpenNebula; please refer to the Xen-specific documentation for further information on the setup of the Xen hypervisor itself.

Xen Configuration

The cluster nodes must have a working installation of Xen that includes a Xen-aware kernel running in Dom0 and the Xen utilities. On each cluster node you must perform the following steps to get the driver running:

  • The remote hosts must have the xend daemon running (/etc/init.d/xend) and a Xen-aware kernel running in Dom0.
  • The <oneadmin> user may need to execute Xen commands with root privileges. This can be done by adding these two lines to the sudoers file of the cluster nodes, so that the <oneadmin> user can execute Xen commands as root (change paths to suit your installation; a quick check of the setup is shown after this list):
%xen    ALL=(ALL) NOPASSWD: /usr/sbin/xm *
%xen    ALL=(ALL) NOPASSWD: /usr/sbin/xentop *
  • You may also want to configure the network for the virtual machines. OpenNebula assumes that the VMs have network access through standard bridging; please refer to the Xen documentation to configure the network for your site.
  • Some distributions have the requiretty option enabled in the sudoers file. It must be disabled so OpenNebula can execute commands through sudo. The line to comment out is this one:
Defaults requiretty
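
A quick way to verify the sudo setup on a cluster node is to run the Xen commands through sudo as the <oneadmin> user. This is only a suggested check; the paths below are the defaults used above and may differ on your installation:

  # Run as <oneadmin> on a cluster node; neither command should prompt for a password
  sudo /usr/sbin/xm list          # lists running domains (Domain-0 should appear)
  sudo /usr/sbin/xentop -b -i 1   # a single batch iteration of xentop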

Driver Files

The driver consists of the following files:

  • $ONE_LOCATION/lib/mads/one_vmm_xen : Shell script wrapper to the driver itself. It sets up the environment and performs other bootstrap tasks.
  • $ONE_LOCATION/lib/mads/one_vmm_xen.rb : The actual Xen driver.
  • $ONE_LOCATION/etc/vmm_xen/vmm_xenrc : Environment setup and bootstrap instructions.
  • $ONE_LOCATION/etc/vmm_xen/vmm_xen.conf : Default values for Xen domain definitions are set here.

Note: If OpenNebula was installed in system-wide mode these directories become /usr/lib/one/mads and /etc/one/, respectively. The rest of this guide refers to the $ONE_LOCATION paths (corresponding to self-contained mode) and omits the equivalent system-wide locations. More information on installation modes can be found in the installation guide.

Configuration

OpenNebula Configuration

OpenNebula needs to know if it is going to use the Xen driver. To achieve this, two driver sections have to be placed within $ONE_LOCATION/etc/oned.conf, one for the VM driver and another for the IM driver:

    IM_MAD = [
        name       = "im_xen",
        executable = "one_im_ssh",
        arguments  = "im_xen/im_xen.conf",
        default    = "im_xen/im_xen.conf" ]
	
    VM_MAD = [
        name       = "vmm_xen",
        executable = "one_vmm_xen",
        default    = "vmm_xen/vmm_xen.conf",
        type       = "xen" ]
  • name is the name of the driver; it needs to be provided at the time of adding a new cluster node to OpenNebula (see the example after this list).
  • executable points to the path of the driver executable file. It can be an absolute path or relative to $ONE_LOCATION/lib/mads.
  • default points to the configuration file for the driver (see below). It can be an absolute path or relative to $ONE_LOCATION/etc.
  • type identifies this driver as a XEN driver.
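
For example, a cluster node could then be added using these driver names. Here host01 is a hypothetical hostname, and the exact argument list of onehost may vary between OpenNebula releases, so check onehost help:

  onehost create host01 im_xen vmm_xen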

Driver Configuration

The driver uses two configuration files, by default placed in $ONE_LOCATION/etc/vmm_xen:

  • Defaults file, specified in the default attribute in the driver specification line in oned.conf, usually $ONE_LOCATION/etc/vmm_xen/vmm_xen.conf. This file is home for default values for domain definitions (in other words, OpenNebula templates). Let's go for a more concrete, VM-related example: if the user wants to set a default value for the CPU requirement of all their Xen domain definitions, simply edit the vmm_xen.conf file and add
  CPU=0.6

to it. Now, when defining a ONE template to be sent to a Xen resource, the user can simply omit the CPU requirement, in which case it will default to 0.6.

It is generally a good idea to place defaults for the Xen-specific attributes, that is, attributes that are mandatory for the Xen driver but not for other hypervisors. Attributes that are specific to Xen yet not mandatory are also good candidates for a default (see the sketch after this list).

  • Run commands file, the $ONE_LOCATION/etc/vmm_xen/vmm_xenrc file, contains environment variables for the driver. You may need to tune the values of XENTOP_PATH and XM_PATH if /usr/sbin/xentop or /usr/sbin/xm does not live in its default location on the remote hosts (see the sketch after this list). This file can also hold instructions to be executed before the actual driver load, either to perform specific tasks or to pass environment variables to the driver. The syntax used for the former is plain shell script that will be evaluated before the driver execution. For the latter, the syntax is the familiar:
  ENVIRONMENT_VARIABLE=VALUE
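
For instance, a vmm_xen.conf holding site-wide defaults could look like the sketch below. The values are only illustrative and must be adapted to the kernels and images available at your site:

  # Default values applied to every Xen domain definition unless the ONE template overrides them
  CPU = 0.6
  OS  = [ kernel="/vmlinuz", initrd="/initrd.img", root="sda1" ]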
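
A minimal vmm_xenrc, in turn, could simply point the driver to the Xen utilities using the environment variable syntax above; the paths are the defaults and may differ on your nodes:

  # Location of the Xen utilities on the remote hosts
  XM_PATH=/usr/sbin/xm
  XENTOP_PATH=/usr/sbin/xentop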

Xen Specific Template Attributes

The following are template attributes specific to Xen; please refer to the OpenNebula user guide for a complete list of the attributes supported to define a VM.

Optional Attributes

  • CREDIT : Xen comes with a credit scheduler. The credit scheduler is a proportional fair-share CPU scheduler built from the ground up to be work-conserving on SMP hosts. This attribute sets a 16-bit value that represents the CPU share this VM will get with respect to the other VMs running on the same host. This value is set in the driver configuration file; it is not intended to be defined per domain.
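
As an illustration, the following line could be placed in $ONE_LOCATION/etc/vmm_xen/vmm_xen.conf; 256 is the credit scheduler's default weight, so higher values give the VMs handled by this driver a larger CPU share relative to other domains:

  CREDIT = 256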

Additional Tuning

The RAW attribute offers the end user the possibility of passing attributes not known by OpenNebula through to Xen. Basically, everything placed in it will be written literally into the Xen deployment file.

  RAW = [ type="xen", data="on_crash=destroy" ]

Testing

In order to test the Xen driver, the following template can be filled in with appropriate values and submitted to a Xen resource:

CPU      = 1
MEMORY   = 128
OS       = [ kernel="/path-to-kernel", initrd="/path-to-initrd", root="sda1" ]
DISK     = [ source="/path-to-image-file", target="sda", readonly="no" ]
NIC      = [ mac="xx:xx:xx:xx:xx:xx", bridge="eth0" ]
GRAPHICS = [ type="vnc", listen="127.0.0.1", port="5900" ]
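
Assuming the template above has been saved to a file, for example xen_test.one (a hypothetical name), with real paths and a valid MAC address, it can be submitted through the onevm command; the exact subcommand may vary between OpenNebula releases, so check onevm help:

  onevm create xen_test.one
  onevm list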