KVM Driver 2.2
KVM (Kernel-based Virtual Machine) is a complete virtualization solution for Linux. It offers full virtualization, where each virtual machine interacts with its own virtualized hardware. This guide describes the use of the KVM hypervisor with OpenNebula; please refer to the KVM-specific documentation for further information on the setup of the KVM hypervisor itself.
The cluster nodes must have a working installation of KVM, which usually requires a CPU with virtualization extensions (Intel VT or AMD-V), the kvm kernel modules (kvm.ko plus kvm-intel.ko or kvm-amd.ko), and the qemu userspace tools.
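A quick way to check these prerequisites on a node (standard Linux commands, not OpenNebula tools):

$ grep -E 'vmx|svm' /proc/cpuinfo
$ lsmod | grep kvm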
OpenNebula uses the libvirt interface to interact with KVM, so the following steps are required on the cluster nodes to get the KVM driver running.

The user that accesses the cluster nodes on behalf of OpenNebula (typically <oneadmin>) has to belong to the <libvirtd> and <kvm> groups in order to use the daemon and be able to launch VMs.

OpenNebula uses libvirt's migration capabilities. More precisely, it uses the TCP protocol offered by libvirt. In order to configure the physical nodes, the following files have to be modified (both edits are sketched after this list):
/etc/libvirt/libvirtd.conf: uncomment "listen_tcp = 1". Security configuration is left to the admin's choice; the file is full of useful comments to help achieve a correct configuration.
/etc/default/libvirt-bin: add the -l option to libvirtd_opts.
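A minimal sketch of both edits (assuming the Debian/Ubuntu layout, where the stock -d flag is already present in libvirtd_opts):

# /etc/libvirt/libvirtd.conf
listen_tcp = 1

# /etc/default/libvirt-bin
libvirtd_opts="-d -l"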
The driver consists of the following files:

$ONE_LOCATION/lib/mads/one_vmm_sh: generic VMM driver.
$ONE_LOCATION/etc/vmm_sh/vmm_shrc: environment setup and bootstrap instructions.
$ONE_LOCATION/etc/vmm_sh/vmm_sh_kvm.conf: default values for KVM domain definitions.
$ONE_LOCATION/lib/remotes/vmm/kvm: commands executed to perform actions.
Note: if OpenNebula was installed in system-wide mode, these directories become /usr/lib/one and /etc/one/, respectively. The rest of this guide refers to the $ONE_LOCATION paths (corresponding to self-contained mode) and omits the equivalent system-wide locations. More information on installation modes can be found in the installation guide.
OpenNebula needs to know if it is going to use the KVM driver. To achieve this, two lines have to be placed within $ONE_LOCATION/etc/oned.conf, one for the VM (VMM) driver and one for the information (IM) driver:
IM_MAD = [
    name       = "im_kvm",
    executable = "one_im_ssh",
    arguments  = "kvm" ]

VM_MAD = [
    name       = "vmm_kvm",
    executable = "one_vmm_sh",
    arguments  = "kvm",
    default    = "vmm_sh/vmm_sh_kvm.conf",
    type       = "kvm" ]
In these declarations, the executable can be given as an absolute path or relative to $ONE_LOCATION/lib/mads, and the default file as an absolute path or relative to $ONE_LOCATION/etc.

The driver uses two configuration files, by default placed in the OpenNebula installation directory:
The first is the defaults file, specified by the default attribute in oned.conf, usually $ONE_LOCATION/etc/vmm_sh/vmm_sh_kvm.conf. This file is home for default values for domain definitions (in other words, for OpenNebula templates). For example, if the user wants to set a default CPU requirement for all of their KVM domain definitions, they can simply edit $ONE_LOCATION/etc/vmm_sh/vmm_sh_kvm.conf and set CPU=0.6 in it. Then, when defining a VM to be sent to a KVM resource, the user can "forget" to set the CPU requirement, in which case it will default to 0.6.
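For instance, a minimal defaults file along these lines (the boot-device line is an illustrative addition; any template attribute can be given a default the same way):

# $ONE_LOCATION/etc/vmm_sh/vmm_sh_kvm.conf
CPU = 0.6
OS  = [ boot = "hd" ]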
It is generally a good idea to set defaults for the KVM-specific attributes, that is, attributes that are mandatory for the KVM driver but not for other hypervisors. Attributes that are specific to KVM but not mandatory are also good candidates for a default.
The second is the $ONE_LOCATION/lib/remotes/vmm/kvm/kvmrc file, which holds instructions executed before the actual driver load, either to perform specific tasks or to pass environment variables to the driver. The syntax used for the former is plain shell script that will be evaluated before the driver execution. For the latter, the syntax is the familiar ENVIRONMENT_VARIABLE=VALUE.
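A short sketch of what kvmrc may contain (both lines are illustrative examples, not shipped defaults):

# plain shell, evaluated before the driver execution
mkdir -p /var/log/one/kvm
# environment variable passed to the driver
LIBVIRT_URI=qemu:///system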
The following template attributes are specific to KVM; please refer to the OpenNebula user guide for the complete list of attributes supported to define a VM.
Specify the boot device to consider in the OS attribute, using BOOT. Valid values for BOOT are fd, hd, cdrom or network, which correspond to the -boot [a|c|d|n] options of the kvm command, respectively. Also, the ARCH attribute must be filled in within the OS section with the architecture the VM OS can run on (e.g. "x86_64", "i686").
For example:
OS = [ KERNEL = /vmlinuz, BOOT = hd, ARCH = "x86_64" ]
In general you will not need to set the following attributes for your VMs; they are provided to let you fine-tune the VM deployment with KVM.
These attributes can be set for each DISK (an illustrative fragment follows the list):

type: disk (default), cdrom or floppy. This attribute corresponds to the media option of the -driver argument of the kvm command.
bus: ide, scsi or pflash. This attribute corresponds to the if option of the -driver argument of the kvm command.
format: raw, qcow2… This attribute corresponds to the format option of the -driver argument of the kvm command.

And these for each NIC:

target: corresponds to the ifname option of the '-net' argument of the kvm command.
script: corresponds to the script option of the '-net' argument of the kvm command.
model: ethernet hardware to emulate. You can get the list of models supported by your kvm binary with:

$ kvm -net nic,model=? -nographic /dev/null
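A hedged template fragment using these attributes (the source, target and network values are placeholders, not defaults):

DISK = [ source = "/images/disk.qcow2", target = "hda", bus = "ide", format = "qcow2" ]
NIC  = [ network = "Public", target = "tap1", model = "e1000" ]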
Virtio is the framework for IO virtualization in KVM. You will need a Linux kernel with the virtio drivers in the guest; check the KVM documentation for more info.
If you want to use the virtio drivers, add the following attributes to your devices (see the example after this list):

for DISK, add the attribute bus=virtio
for NIC, add the attribute model=virtio
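For example (source, target and network are placeholders):

DISK = [ source = "/images/disk.qcow2", target = "vda", bus = "virtio" ]
NIC  = [ network = "Public", model = "virtio" ]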
The FEATURES attribute toggles hypervisor features for the VM: pae (physical address extension) and acpi (needed, for example, for a graceful guest shutdown). Format and valid values:
FEATURES=[ pae={yes|no}, acpi={yes|no} ]
Default values for these features can be set in the driver configuration file, so they don't need to be specified for every VM.
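For instance, in the driver configuration file (the values shown are illustrative, not shipped defaults):

# $ONE_LOCATION/etc/vmm_sh/vmm_sh_kvm.conf
FEATURES = [ pae = "no", acpi = "yes" ]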
The RAW attribute offers the end user the possibility of passing attributes not known by OpenNebula straight through to KVM. Basically, everything placed here will be written literally into the KVM deployment file (use the libvirt XML format and semantics).
RAW = [ type = "kvm", data = "<devices><serial type=\"pty\"><source path=\"/dev/pts/5\"/><target port=\"0\"/></serial><console type=\"pty\" tty=\"/dev/pts/5\"><source path=\"/dev/pts/5\"/><target port=\"0\"/></console></devices>" ]