KVM Driver 3.0

KVM (Kernel-based Virtual Machine) is a complete virtualization solution for Linux. It offers full virtualization, where each Virtual Machine interacts with its own virtualized hardware. This guide describes the use of the KVM virtualizer with OpenNebula; please refer to the KVM-specific documentation for further information on the setup of the KVM hypervisor itself.

Requirements

The hosts must have a working installation of KVM, which usually requires the following (a quick check for the CPU and kernel module requirements is sketched after this list):

  • CPU with VT extensions
  • libvirt >= 0.4.0
  • kvm kernel modules (kvm.ko, kvm-{intel,amd}.ko). Available from kernel 2.6.20 onwards.
  • the qemu user-land tools
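
You can verify the CPU and kernel module requirements with the commands below (a sketch; the loaded module will be kvm_intel or kvm_amd depending on the CPU vendor):

$ egrep -c '(vmx|svm)' /proc/cpuinfo
$ lsmod | grep kvm

A non-zero count from the first command means the CPU has VT extensions; the second command should list the kvm modules.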

Considerations & Limitations

  • KVM currently only supports 4 IDE devices. You have to take this into account when adding disks.
    • Contextualization ISO uses hdc.
    • Datablock images use hde; the disk TARGET should be set manually.
    • If you need more disk devices, you should use SCSI or virtio instead.
  • By default, live migrations are started from the host where the VM is currently running. If this is a problem in your setup, you can activate local live migration by adding -l migrate=migrate_local to the vmm_mad arguments (see the sketch after this list).
  • If you get error messages similar to error: cannot close file: Bad file descriptor, upgrade your libvirt version. Version 0.8.7 has a bug related to file-closing operations. https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=672725
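
For illustration, activating local live migration only changes the arguments of the vmm_kvm driver in /etc/one/oned.conf (the full section is shown under Configuration below). A sketch, where the exact placement of the -l option within the arguments string may vary:

    VM_MAD = [
        name       = "vmm_kvm",
        executable = "one_vmm_exec",
        arguments  = "-t 15 -r 0 -l migrate=migrate_local kvm",
        default    = "vmm_exec/vmm_exec_kvm.conf",
        type       = "kvm" ]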

Configuration

KVM Configuration

OpenNebula uses the libvirt interface to interact with KVM, so the following steps are required in the hosts to get the KVM driver running:

  • Qemu should be configured to not change file ownership. Modify /etc/libvirt/qemu.conf to include dynamic_ownership = 0. To be able to use the images copied by OpenNebula, also change the user and group under which libvirtd is run to “oneadmin” (see the sketch after this list).
  • The remote hosts must have the libvirt daemon running.
  • The user that accesses the remote hosts on behalf of OpenNebula (typically <oneadmin>) has to belong to the <libvirtd> and <kvm> groups in order to use the daemon and be able to launch VMs.
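
For example, the relevant lines of /etc/libvirt/qemu.conf would end up looking like this (a sketch; the rest of the file can stay untouched):

    # /etc/libvirt/qemu.conf
    user  = "oneadmin"
    group = "oneadmin"
    dynamic_ownership = 0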

:!: If AppArmor is active (it is by default in Ubuntu), you should add the following rule for /var/lib/one to the end of /etc/apparmor.d/libvirt-qemu:

<xterm>
owner /var/lib/one/** rw,
</xterm>

OpenNebula uses libvirt's migration capabilities. More precisely, it uses the TCP protocol offered by libvirt. In order to configure the physical hosts, the following files have to be modified:

  • /etc/libvirt/libvirtd.conf : Uncomment “listen_tcp = 1”. The security configuration is left to the admin's choice; the file is full of useful comments to help you achieve a correct configuration.
  • /etc/default/libvirt-bin : add the -l option to libvirtd_opts, as shown below.
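
For instance, after both changes the relevant lines would read as follows (a sketch; it assumes the default -d daemonize option was already present, and on non-Debian distributions the options file may be /etc/sysconfig/libvirtd instead):

    # /etc/libvirt/libvirtd.conf
    listen_tcp = 1

    # /etc/default/libvirt-bin
    libvirtd_opts="-d -l"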

OpenNebula Configuration

OpenNebula needs to know if it is going to use the KVM Driver. To achieve this, uncomment these drivers in /etc/one/oned.conf:

    IM_MAD = [
        name       = "im_kvm",
        executable = "one_im_ssh",
        arguments  = "-r 0 -t 15 kvm" ]

    VM_MAD = [
        name       = "vmm_kvm",
        executable = "one_vmm_exec",
        arguments  = "-t 15 -r 0 kvm",
        default    = "vmm_exec/vmm_exec_kvm.conf",
        type       = "kvm" ]

Usage

The following are template attributes specific to KVM; please refer to the template reference documentation for a complete list of the attributes supported to define a VM.

Mandatory Attributes

Specify the boot device to consider in the OS attribute, using BOOT. Valid values for BOOT are fd, hd, cdrom or network, which correspond to the -boot [a|c|d|n] options of the kvm command, respectively. Also, the ARCH attribute must be set within the OS section to the architecture the VM OS can run on (e.g. “x86_64”, “i686”).

For example:

    OS=[ 
      KERNEL = /vmlinuz,
      BOOT   = hd,
      ARCH   = "x86_64"]

Optional Attributes

In general you will not need to set the following attributes for your VMs; they are provided to let you fine-tune the VM deployment with KVM.

DISK

  • type: defines the type of the media to be exposed to the VM; possible values are: disk (default), cdrom or floppy. This attribute corresponds to the media option of the -drive argument of the kvm command.
  • bus: specifies the type of disk device to emulate; possible values are driver specific, with typical values being ide, scsi or pflash. This attribute corresponds to the if option of the -drive argument of the kvm command.
  • driver: specifies the format of the disk image; possible values are raw, qcow2… This attribute corresponds to the format option of the -drive argument of the kvm command.
  • cache: specifies the optional cache mechanism; possible values are “default”, “none”, “writethrough” and “writeback”. A combined example is shown after this list.
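
Putting these together, a DISK section that exposes a qcow2 image on the virtio bus with host caching disabled could look like this (a sketch; the IMAGE_ID value is illustrative):

    DISK = [ IMAGE_ID = 7,
             BUS      = "virtio",
             DRIVER   = "qcow2",
             CACHE    = "none" ]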

NIC

  • target: name of the tun device created for the VM. It corresponds to the ifname option of the '-net' argument of the kvm command.
  • script: name of a shell script to be executed after creating the tun device for the VM. It corresponds to the script option of the '-net' argument of the kvm command.
  • model: ethernet hardware to emulate. You can get the list of available models with this command:
$ kvm -net nic,model=? -nographic /dev/null
  • filter: defines a network filtering rule for the interface. Libvirt includes some predefined rules (e.g. clean-traffic) that can be used (see the example after this list). Check the Libvirt documentation for more information; you can also list the rules in your system with:
$ virsh -c qemu:///system nwfilter-list
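
For example, a NIC that emulates virtio hardware and applies libvirt's predefined clean-traffic rule could be defined as follows (a sketch; the NETWORK_ID value is illustrative):

    NIC = [ NETWORK_ID = 0,
            MODEL      = "virtio",
            FILTER     = "clean-traffic" ]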

Virtio

Virtio is the framework for IO virtualization in KVM. You will need a Linux kernel with the virtio drivers for the guest; check the KVM documentation for more info.

If you want to use the virtio drivers add the following attributes to your devices:

  • DISK, add the attribute bus=virtio
  • NIC, add the attribute model=virtio

FEATURES

  • pae: physical address extension mode allows 32-bit guests to address more than 4 GB of memory.
  • acpi: useful for power management; for example, with KVM guests it is required for graceful shutdown to work.

Format and valid values:

    FEATURES=[
        pae={yes|no},   
        acpi={yes|no} ]

Default values for these features can be set in the driver configuration file so they don't need to be specified for every VM.

Additional Attributes

The RAW attribute offers the end user the possibility of passing attributes not known by OpenNebula to KVM. Basically, everything placed here will be written literally into the KVM deployment file (use libvirt XML format and semantics).

  RAW = [ type = "kvm",
          data = "<devices><serial type=\"pty\"><source path=\"/dev/pts/5\"/><target port=\"0\"/></serial><console type=\"pty\" tty=\"/dev/pts/5\"><source path=\"/dev/pts/5\"/><target port=\"0\"/></console></devices>" ]

Tuning & Extending

The driver consists of the following files:

  • /usr/lib/one/mads/one_vmm_exec : generic VMM driver.
  • /var/lib/one/remotes/vmm/kvm : commands executed to perform actions.

And the following driver configuration files:

  • /etc/one/vmm_exec/vmm_exec_kvm.conf : This file is home for default values for domain definitions (in other words, OpenNebula templates). For example, if the user wants to set a default value for CPU requirements for all of their KVM domain definitions, simply edit the /etc/one/vmm_exec/vmm_exec_kvm.conf file and add the line
  CPU=0.6

to it. Now, when defining a template to be sent to a KVM resource, the user has the choice of “forgetting” to set the CPU requirement, in which case it will default to 0.6.

It is generally a good idea to place defaults for the KVM-specific attributes, that is, attributes mandatory in the KVM driver that are not mandatory for other hypervisors. Attributes that are optional but specific to KVM are also recommended to have a default.

  • /var/lib/one/remotes/vmm/kvm/kvmrc : This file holds instructions to be executed before the actual driver load, to perform specific tasks or to pass environment variables to the driver. The syntax used for the former is plain shell script that will be evaluated before the driver execution. For the latter, the syntax is the familiar:
  ENVIRONMENT_VARIABLE=VALUE
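
As an illustration, a kvmrc could look similar to this (a sketch; LIBVIRT_URI is the connection URI the driver scripts use when calling virsh, and the LANG export stands in for any plain shell instruction):

    # plain shell instructions, evaluated before the driver actions run
    export LANG=C

    # environment variable passed to the driver scripts
    export LIBVIRT_URI=qemu:///system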

See the Virtual Machine drivers reference for more information.