Xen Driver 3.0

The Xen hypervisor offers a powerful, efficient and secure feature set for the virtualization of x86, IA64, PowerPC and other CPU architectures. It delivers both paravirtualization and full virtualization. This guide describes the use of Xen with OpenNebula; please refer to the Xen-specific documentation for further information on setting up the Xen hypervisor itself.


Requirements

The hosts must have a working installation of Xen, including a Xen-aware kernel running in Dom0 and the Xen utilities.
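
As an optional sanity check, you can verify that the hypervisor and the utilities respond by running, as root on each host:

  # xm info

If this prints the host and hypervisor details, the Xen side is ready for OpenNebula.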

Considerations & Limitations

  • Xen HVM currently supports only 4 IDE devices. You have to take this into account when adding disks:
    • The contextualization ISO uses hdc.
    • Datablock images use hde; the disk TARGET should be set manually.
    • If you need more disk devices, use SCSI instead.
  • OpenNebula does not manage kernels or initrd images. If you specify kernel/initrd images, make sure that those files (paths) are accessible from all the physical hosts (see the sketch after this list), either by:
    • Copying the files beforehand to the same place on the remote nodes, or
    • Placing the files in a shared directory mounted at the same path on all the remote nodes.
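
As an illustration, a minimal OS section pointing to explicit kernel/initrd files might look like the sketch below; the paths are hypothetical and must resolve to the same files on every physical host:

  # Placeholder paths; these files must exist at this location on all hosts
  OS = [ KERNEL = "/boot/vmlinuz-xen",
         INITRD = "/boot/initrd-xen.img",
         ROOT   = "xvda1" ]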

Known Bugs

  • In Debian the package xen-utils-common version 4.0.0-1 has a bug. Update to a newer version.
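
You can check which version is installed before deciding whether an update is needed:

  # dpkg -s xen-utils-common | grep Version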

Configuration

Xen Configuration

The Xen packages on Debian Lenny seem to be broken and do not work with the tap:aio interface. Sander Klous proposes the following workaround on the mailing list:

  # ln -s /usr/lib/xen-3.2-1/bin/tapdisk /usr/sbin
  # echo xenblktap >> /etc/modules
  # reboot

In each Host you must perform the following steps to get the driver running:

  • The remote hosts must have the xend daemon running (/etc/init.d/xend) and a Xen-aware kernel running in Dom0.
  • The <oneadmin> user may need to execute Xen commands with root privileges. This can be done by adding these two lines to the sudoers file of the hosts, so the <oneadmin> user can execute Xen commands as root (change the paths to suit your installation):
%xen    ALL=(ALL) NOPASSWD: /usr/sbin/xm *
%xen    ALL=(ALL) NOPASSWD: /usr/sbin/xentop *
  • You may also want to configure the network for the virtual machines. OpenNebula assumes that the VMs have network access through standard bridging; please refer to the Xen documentation to configure the network for your site.
  • Some distributions have the requiretty option enabled in the sudoers file. It must be disabled so that ONE can execute commands using sudo. The line to remove, or comment out by placing a # at the beginning, is this one:
Defaults requiretty

OpenNebula Configuration

OpenNebula needs to know that it is going to use the Xen driver. To enable it, uncomment these drivers in /etc/one/oned.conf:

    IM_MAD = [
        name       = "im_xen",
        executable = "one_im_ssh",
        arguments  = "xen" ]

    VM_MAD = [
        name       = "vmm_xen",
        executable = "one_vmm_exec",
        arguments  = "xen",
        default    = "vmm_exec/vmm_exec_xen.conf",
        type       = "xen" ]

Usage

The following are template attributes specific to Xen; please refer to the template reference documentation for a complete list of the attributes supported to define a VM.

Optional Attributes

CREDIT

Xen comes with a credit scheduler, a proportional fair-share CPU scheduler built from the ground up to be work-conserving on SMP hosts. This attribute sets a 16-bit value that represents the amount of CPU sharing this VM gets with respect to the others running on the same host. This value is set in the driver configuration file and is not intended to be defined per domain.
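
As a sketch, a default credit weight could be placed in /etc/one/vmm_exec/vmm_exec_xen.conf like this (the value 256 is illustrative; it matches the credit scheduler's usual default weight):

  # Relative CPU weight: a domain with twice this value gets twice the CPU share
  CREDIT=256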

DISK

  • type: This attribute defines the Xen backend for disk images; possible values are file:, tap:aio:… Note the trailing :.
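
A sketch of a DISK section using the type attribute described above; the source path and target are placeholders:

  # Placeholder source and target; note the trailing colon in the backend value
  DISK = [ SOURCE = "/var/lib/one/images/disk.img",
           TARGET = "xvda",
           TYPE   = "tap:aio:" ]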

NIC

  • model: This attribute defines the type of the vif. It corresponds to the type attribute of a vif; possible values are ioemu and netfront.
  • ip: This attribute defines the IP of the vif and can be used to set antispoofing rules. For example, if you want to use antispoofing with network-bridge, you will have to add this line to /etc/xen/xend-config.sxp:
    (network-script 'network-bridge antispoofing=yes')
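
Putting both attributes together, a sketch of a NIC section (the network name and IP below are placeholders):

  # Placeholder network name and IP address
  NIC = [ NETWORK = "Public",
          MODEL   = "netfront",
          IP      = "192.168.0.10" ]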

OS

  • bootloader: You can use this attribute to point to your pygrub loader. This way you won't need to specify the kernel/initrd, and the kernel inside the image will be used. Make sure the kernel inside is DomU-compatible if using paravirtualization.
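
A sketch of an OS section using pygrub; the path below is a common location but may differ in your distribution:

  # Adjust the path to where pygrub lives on your hosts
  OS = [ BOOTLOADER = "/usr/bin/pygrub" ]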

Additional Attributes

The raw attribute offers the end user the possibility of passing attributes not known by OpenNebula through to Xen. Basically, everything placed here will be written literally into the Xen deployment file.

  RAW = [ type="xen", data="on_crash=destroy" ]

Tuning & Extending

The driver consists of the following files:

  • /usr/lib/one/mads/one_vmm_exec : generic VMM driver.
  • /var/lib/one/remotes/vmm/xen : commands executed to perform actions.

And the following driver configuration files:

  • /etc/one/vmm_exec/vmm_exec_xen.conf : This file is home for default values for domain definitions (in other words, OpenNebula templates). Let's go for a more concrete, VM-related example: if the user wants to set a default value for the CPU requirement of all their Xen domain definitions, they simply edit the vmm_exec_xen.conf file and set a
  CPU=0.6

into it. Now, when defining a ONE template to be sent to a Xen resource, the user has the choice of “forgetting” to set the CPU requirement, in which case it will default to 0.6.

It is generally a good idea to set defaults for the Xen-specific attributes, that is, attributes that are mandatory for the Xen driver but not for other hypervisors. Attributes that are specific to Xen but not mandatory are also good candidates for a default.

  • /var/lib/one/remotes/vmm/xen/xenrc : This file contains environment variables for the driver. You may need to tune the value of XM_PATH if /usr/sbin/xm does not live in its default location on the remote hosts. This file can also hold instructions to be executed before the actual driver loads, to perform specific tasks or to pass environment variables to the driver. The syntax used for the former is plain shell script that will be evaluated before the driver execution. For the latter, the syntax is the familiar:
  ENVIRONMENT_VARIABLE=VALUE
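
For instance, a minimal xenrc might look like the following sketch (the sudo prefix assumes the sudoers setup shown earlier; adjust the path to your installation):

  # Run xm through sudo, as allowed by the sudoers configuration above
  XM_PATH="sudo /usr/sbin/xm"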

See the Virtual Machine drivers reference for more information.