The XEN hypervisor offers a powerful, efficient and secure feature set for virtualization of x86, IA64, PowerPC and other CPU architectures. It delivers both paravirtualization and full virtualization.
To gather and install the available software on an AMD64 Ubuntu system (this also modifies GRUB to boot the XEN kernel automatically):
# sudo apt-get install linux-image-2.6-xen-amd64 xen-hypervisor-3.0.3-1-amd64 xen-tools linux-headers-2.6-xen-amd64 xen-linux-system-2.6.18-4-xen-amd64 bridge-utils
Steps in the remote host to get this driver running:

  * The remote host needs the xend daemon running (started via /etc/init.d/xend) and a XEN aware kernel running in Dom0.
  * The <oneadmin> user may need to execute XEN commands with root privileges, and the root user must be able to read and write the XEN image files. This can be done by creating a group containing <oneadmin> and root and setting the appropriate group ownership and permissions on the XEN image files. You also need to add these two lines to the sudoers file of the execution nodes so the <oneadmin> user can execute XEN commands as root (change paths to suit your installation):

%xen ALL=(ALL) NOPASSWD: /usr/sbin/xm *
%xen ALL=(ALL) NOPASSWD: /usr/sbin/xentop *
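The group and permission setup described above can be sketched as follows. Creating the group requires root, so those commands are shown as comments; the permission change is demonstrated on an example directory, since the image path (IMAGE_DIR below) is an assumption you must adapt to wherever your XEN image files actually live:

```shell
# Creating the shared group (requires root; run once per node):
#   groupadd xen
#   usermod -a -G xen oneadmin
#   usermod -a -G xen root
#
# Then give that group read/write access to the image files.
# IMAGE_DIR is an example path, not a XEN default.
IMAGE_DIR=${IMAGE_DIR:-/tmp/xen-images-demo}
mkdir -p "$IMAGE_DIR"
touch "$IMAGE_DIR/disk.img"
# chgrp requires membership in the target group, hence commented:
#   chgrp -R xen "$IMAGE_DIR"
chmod -R g+rw "$IMAGE_DIR"
ls -l "$IMAGE_DIR"
```

After this, any member of the xen group (here, <oneadmin> and root) can read and write the image files.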
In /etc/xen/xend-config.sxp, set the following line:

(network-script 'network-bridge netdev=eth0')

and, to enable live migration:

(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-hosts-allow 'your.host.here')
Relative to $ONE_LOCATION:

  * bin/one_vmm_xen: shell script wrapper to the driver itself; sets up the environment and performs other bootstrap tasks.
  * bin/one_vmm_xen.rb: the actual XEN driver.
  * etc/vmm_xen/vmm_xenrc: environment setup and bootstrap instructions.
  * etc/vmm_xen/vmm_xen.conf: default values for XEN domain definitions.
ONE needs to know if it is going to use the XEN driver. To achieve this, two entries have to be placed in /etc/oned.conf, one for the VM driver and another for the IM driver:
IM_MAD = [
    name       = "im_xen",
    executable = "bin/one_im_ssh",
    arguments  = "etc/im_xen/im_xen.conf",
    default    = "etc/im_xen/im_xen.conf" ]

VM_MAD = [
    name       = "vmm_xen",
    executable = "bin/one_vmm_xen",
    default    = "etc/vmm_xen/vmm_xen.conf",
    type       = "xen" ]
The driver uses two configuration files, by default placed in $ONE_LOCATION/etc/vmm_xen:

  * $ONE_LOCATION/etc/vmm_xen/vmm_xen.conf, or wherever the default attribute in the driver specification line of the oned.conf configuration file points to. This file holds default values for domain definitions (in other words, ONE templates).
  * $ONE_LOCATION/etc/vmm_xen/vmm_xenrc contains environment variables for the driver. You may need to tune the values of XENTOP_PATH and XM_PATH if /usr/sbin/xentop or /usr/sbin/xm do not live in their default locations on the remote hosts. This file can also hold instructions to be executed before the driver loads, either to perform specific tasks or to pass environment variables to the driver. The syntax for the former is plain shell script, evaluated before the driver execution. For the latter, the syntax is the familiar ENVIRONMENT_VARIABLE=VALUE.
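For instance, a minimal vmm_xenrc overriding both paths could look like the fragment below (the values shown are just the default locations mentioned above; change them to match your remote hosts):

```shell
# vmm_xenrc: evaluated as plain shell before the driver starts
XENTOP_PATH=/usr/sbin/xentop
XM_PATH=/usr/sbin/xm
```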
Let's look at a more concrete, VM-related example. If the user wants to set a default CPU requirement for all of their XEN domain definitions, they simply add

CPU=0.6

to the vmm_xen.conf file. Now, when defining a ONE template to be sent to a XEN resource, the user can "forget" to set the CPU requirement, in which case it will default to 0.6.
It is generally a good idea to place defaults for the XEN-specific attributes, that is, attributes that are mandatory for the XEN driver but not for other hypervisors. Attributes that are not mandatory but are specific to XEN are also recommended to have a default. For example:

RAW = [ type="xen", data="on_crash=destroy" ]
In order to test the XEN driver, the following template can be instantiated with appropriate values and sent to a XEN resource:

CPU      = 0.5
MEMORY   = 128
OS       = [ kernel="/path-to-kernel", initrd="/path-to-initrd", root="sda1" ]
DISK     = [ source="/path-to-image-file", target="sda", readonly="no" ]
NIC      = [ mac="xx:xx:xx:xx:xx:xx", bridge="eth0" ]
GRAPHICS = [ type="vnc", listen="127.0.0.1", port="5900" ]
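Once the placeholder values are filled in, the template can be saved to a file and submitted with the OpenNebula CLI. A sketch, assuming the file name (arbitrary) and a running oned, which is why the submission itself is shown as a comment:

```shell
# Write the example XEN template to a file (placeholder values kept as-is)
cat > /tmp/xen_test.one <<'EOF'
CPU      = 0.5
MEMORY   = 128
OS       = [ kernel="/path-to-kernel", initrd="/path-to-initrd", root="sda1" ]
DISK     = [ source="/path-to-image-file", target="sda", readonly="no" ]
NIC      = [ mac="xx:xx:xx:xx:xx:xx", bridge="eth0" ]
GRAPHICS = [ type="vnc", listen="127.0.0.1", port="5900" ]
EOF
# Submit it (requires a running oned and a registered XEN host):
#   onevm create /tmp/xen_test.one
```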