VMware Drivers 4.0

The VMware Drivers enable the management of an OpenNebula cloud based on VMware ESX and/or VMware Server hypervisors. They use libvirt to invoke the Virtual Infrastructure SOAP API exposed by the VMware hypervisors, and feature a simple configuration process that leverages the stability, performance and feature set of any existing VMware-based OpenNebula cloud.

inlinetoc

Requirements

In order to use the VMware Drivers, some software dependencies have to be met:

  • libvirt: libvirt is used to access the VMware hypervisors, so it needs to be installed with ESX support. We recommend version 1.0.0 or higher, which enables interaction with the vCenter VMware product, required to use vMotion.
  • ESX, VMware Server: At least one VMware hypervisor needs to be installed. Further configuration for the DATASTORE is needed, and it is explained in the TM part of the Configuration section.

Optional Requirements. To enable some OpenNebula features you may need:

  • ESX CLI (Network): In order to use the extended networking capabilities provided by the Networking subsystem for VMware, the vSphere CLI needs to be installed.
  • vMotion: VMware's vMotion capability allows live migration of a Virtual Machine between two ESX hosts, allowing for load balancing between cloud worker nodes without downtime in the migrated virtual machine. In order to use this capability, the following requisites have to be met:
    • Shared storage between the source and target host, mounted in both hosts as the same DATASTORE (we are going to assume it is called “images” in the rest of this document)
    • vCenter Server installed and configured, details in the Installation Guide for ESX and vCenter.
    • A datacenter created in the vCenter server that includes all ESX hosts between which Virtual Machines want to be live migrated (we are going to assume it is called “onecenter” in the rest of this document).
    • A user created in vCenter with the same username and password as the ones in the ESX hosts, with administrator permissions.

:!: Please note that the libvirt version shipped with some Linux distributions does not include ESX support. In these cases it may be necessary to recompile the libvirt package with the --with-esx option.

Considerations & Limitations

  • Only one vCenter can be used for live migration.
  • Datablock images and volatile disk images will always be created without format, and have to be formatted by the guest (see the example after this list).
  • libvirt <= 0.9.2 (and possibly higher versions) does not report resource consumption of the VMs; the only information retrieved from the VMs is their state.
  • In order to use the attach/detach functionality, the original VM must have at least one SCSI disk, and the disk to be attached/detached must be placed on a SCSI bus (i.e., “sd” as DEV_PREFIX).
  • The ESX hosts need to be properly licensed, with write access to the exported API (as the Evaluation license does).
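
For example, a freshly created datablock typically needs to be formatted from within the guest before use. A minimal sketch, assuming a Linux guest where the new disk shows up as /dev/sdb (the device name is illustrative):

<xterm>
# mkfs.ext4 /dev/sdb
# mount /dev/sdb /mnt
</xterm>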

VMware Configuration

Users & Groups

The creation of a user in the VMware hypervisor is recommended. Go to the Users & Groups tab in the VI Client, and create a new user (for instance, “oneadmin”) with the same UID and username as the oneadmin user executing OpenNebula in the front-end. Please remember to give full permissions to this user (Permissions tab).

:!: After registering a datastore, make sure that the “oneadmin” user can write in said datastore (this is not needed if the “root” user is used to access the ESX). In case “oneadmin” cannot write in “/vmfs/volumes/<ds_id>”, then permissions need to be adjusted. This can be done in various ways, the recommended one being:

  • Add “oneadmin” to the “root” group using the Users & Groups tab in the VI Client
  • Run $ chmod g+w /vmfs/volumes/<ds_id> in the ESX host
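
To verify that “oneadmin” can actually write in the datastore, a quick check like the following can be run on the ESX host (the test file name is illustrative):

<xterm>
$ su - oneadmin
$ touch /vmfs/volumes/<ds_id>/one_write_test
$ rm /vmfs/volumes/<ds_id>/one_write_test
</xterm>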

SSH access

Since almost all the available datastores in this version require SSH access (or at least need it to unlock all the functionality of OpenNebula), please remember to tick the “Grant shell access to this user” checkbox. Also, to use the vmware datastore, “oneadmin” needs to be added to the root group.

The access via SSH needs to be passwordless. Please follow these steps to configure the ESX node:

  • Log in to the ESX host (ssh <esx-host>)

<xterm>
$ su -
$ mkdir /etc/ssh/keys-oneadmin
$ chmod 755 /etc/ssh/keys-oneadmin
$ su - oneadmin
$ vi /etc/ssh/keys-oneadmin/authorized_keys
<paste here the contents of the oneadmin's front-end account public key (FE → $HOME/.ssh/id_{rsa,dsa}.pub) and exit vi>
$ chmod 600 /etc/ssh/keys-oneadmin/authorized_keys
</xterm>

More information on passwordless ssh connections here.
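
A quick way to verify the passwordless access is to run a command on the ESX host from the front-end as “oneadmin”; no password prompt should appear:

<xterm>
$ ssh oneadmin@<esx-host> hostname
</xterm>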

Tools setup: vmkfstools & vim-cmd

  • The vmkfstools command is used by the drivers to manage disks in the ESX hosts, so the “oneadmin” user needs to be able to run it; setting its setuid bit achieves this:

<xterm>
$ su
$ chmod +s /sbin/vmkfstools
</xterm>

  • In order to use the attach/detach functionality for VM disks, some extra configuration steps are needed in the ESX hosts. For ESX > 5.0:

<xterm>
$ su
$ chmod +s /bin/vim-cmd
</xterm>
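
To confirm that the setuid bit is in place on both tools, list them and look for the “s” flag in the owner permissions:

<xterm>
$ ls -l /sbin/vmkfstools /bin/vim-cmd
</xterm>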

Persistency

Persistency of the ESX filesystem has to be handled with care. Most ESX 5 files reside in an in-memory filesystem, which means faster access but also no persistence across reboots; this can be inconvenient when managing an ESX farm for an OpenNebula cloud.

Here is a recipe to make the configuration needed for OpenNebula persistent across reboots. The changes need to be done as root.

<xterm>
# vi /etc/rc.local

## Add this at the bottom of the file

mkdir /etc/ssh/keys-oneadmin
cat > /etc/ssh/keys-oneadmin/authorized_keys << _SSH_KEYS_
ssh-rsa <really long string with oneadmin's ssh public key>
_SSH_KEYS_
chmod 600 /etc/ssh/keys-oneadmin/authorized_keys
chmod +s /sbin/vmkfstools /bin/vim-cmd
chmod 755 /etc/ssh/keys-oneadmin
chown oneadmin /etc/ssh/keys-oneadmin/authorized_keys

# /sbin/auto-backup.sh
</xterm>

Running /sbin/auto-backup.sh afterwards saves the configuration so that the changes survive a reboot.

This information was based on this blog post.

Storage

There are two possible configurations regarding storage: using NFS Datastores or VMFS Datastores. Please refer to the VMware Storage Model guide for more details.

Networking

Networking can be used in two different modes: pre-defined (to use pre-defined port groups) or dynamic (to dynamically create port groups and VLAN tagging). Please refer to the VMware Networking guide for more details.

VNC

In order to access running VMs through VNC, the ESX host needs to be configured beforehand, basically to allow inbound VNC connections through its firewall.

For ESX 5.x please follow this guide.

OpenNebula Configuration

OpenNebula Daemon

  • In order to configure OpenNebula to work with the VMware drivers, the following sections need to be uncommented or added in the /etc/one/oned.conf file:
#-------------------------------------------------------------------------------
#  VMware Virtualization Driver Manager Configuration
#-------------------------------------------------------------------------------
VM_MAD = [
    name       = "vmware",
    executable = "one_vmm_sh",
    arguments  = "-t 15 -r 0 vmware -s sh",
    default    = "vmm_exec/vmm_exec_vmware.conf",
    type       = "vmware" ]

#-------------------------------------------------------------------------------
#  VMware Information Driver Manager Configuration
#-------------------------------------------------------------------------------
IM_MAD = [
      name       = "vmware",
      executable = "one_im_sh",
      arguments  = "-c -t 15 -r 0 vmware" ]
#-------------------------------------------------------------------------------

SCRIPTS_REMOTE_DIR=/tmp/one
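
As with any change to /etc/one/oned.conf, OpenNebula needs to be restarted (as the oneadmin user in the front-end) for the new drivers to be loaded:

<xterm>
$ one stop
$ one start
</xterm>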

VMware Drivers

The configuration attributes for the VMware drivers are set in the /etc/one/vmwarerc file. In particular the following values can be set:

ATTRIBUTE     DESCRIPTION
:libvirt_uri  URI used to connect to VMware through libvirt. When using VMware Server, the connection string set under :libvirt_uri needs to have its prefix changed from esx to gsx
:username     username to access the VMware hypervisor
:password     password to access the VMware hypervisor
:datacenter   (only for vMotion) name of the datacenter where the hosts have been registered
:vcenter      (only for vMotion) name or IP of the vCenter that manages the ESX hosts

Example of the configuration file:

:libvirt_uri: "esx://@HOST@/?no_verify=1&auto_answer=1"
:username: "oneadmin"
:password: "mypass"
:datacenter: "ha-datacenter"
:vcenter: "London-DC"

:!: Please be aware that the above rc file, in stark contrast with other rc files in OpenNebula, uses YAML syntax; therefore please put the values between quotes.

VMware physical hosts

The physical hosts containing the VMware hypervisors need to be added with the appropriate VMware Drivers. If the box running the VMware hypervisor is called, for instance, esx-host, the host would need to be registered with the following command (dynamic network mode):

$ onehost create esx-host -i vmware -v vmware -n vmware

or, for pre-defined networking:

$ onehost create esx-host -i vmware -v vmware -n dummy
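
After a monitoring cycle, the new host should show up with the on state in the host list; if it stays in err, check /var/log/one/oned.log for monitoring errors:

<xterm>
$ onehost list
</xterm>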

Usage

Images

To register an existing VMware disk in an OpenNebula image catalog you need to:

  • Place all the .vmdk files that make up a disk (they can be easily spotted: there is a main <name-of-the-image>.vmdk file, and various <name-of-the-image>-sXXX.vmdk flat files) in the same directory, with no more files than these.
  • Afterwards, an image template needs to be written, using the absolute path to the directory as the PATH value. For example:
NAME = MyVMwareDisk
PATH = /absolute/path/to/disk/folder
TYPE = OS

:!: To register a .iso file with type CDROM there is no need to create a folder; just point with PATH to the absolute path of the .iso file.

:!: In order to register a VMware disk through Sunstone, create a zip-compressed tarball (.tar.gz) and upload that (it will be automatically decompressed in the datastore). Please note that the tarball must contain only the folder with the .vmdk files inside; no extra directories can be contained in that folder.
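
For example, assuming the template above is saved in a file called myvmwaredisk.one (the file name and target datastore are illustrative), the image can be registered with:

<xterm>
$ oneimage create myvmwaredisk.one -d default
</xterm>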

Once registered, the image can be used as any other image in the OpenNebula system, as described in the Virtual Machine Images guide.

DATABLOCKS

Due to the different tools available in the front-end and the ESX hosts, there are different possibilities when creating DATABLOCKs or volatile disks, in terms of the formats that can be enforced on the newly created disks:

The template of a DATABLOCK can only reference the value raw for the FSTYPE attribute in NFS datastores, and any of the following in VMFS datastores (more info here; see also the example after this list):

  • vmdk_thin
  • vmdk_zeroedthick
  • vmdk_eagerzeroedthick
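
For example, a minimal datablock image template for a VMFS datastore could look like this (name, size in MB, and format are illustrative):

NAME   = MyDataBlock
TYPE   = DATABLOCK
SIZE   = 1024
FSTYPE = vmdk_thin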

Virtual Machines

VOLATILE DISKS

At the time of defining a VM, a volatile disk can be described, which will be created in the remote ESX host. This is true for disks defined in the VM template, and also for the file needed in the “onevm attachdisk” operation.

It is not possible to format the disk, so it will appear as a raw device on the guest, which will then need to be formatted. Possible values for the FORMAT attribute (more info on this here; see also the example after this list):

  • vmdk_thin
  • vmdk_zeroedthick
  • vmdk_eagerzeroedthick
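
For instance, a volatile disk placed on a SCSI bus (as required for the attach/detach functionality, see the Considerations & Limitations section) could be defined in the VM template as follows (size in MB and format are illustrative):

DISK = [ TYPE       = fs,
         SIZE       = 2048,
         FORMAT     = vmdk_thin,
         DEV_PREFIX = sd ]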

RAW/DATA_VMX

You can add metadata straight to the .vmx file using RAW/DATA_VMX. This comes in handy to specify, for example, a specific guestOS type; more info here.

Following the last two sections, if we want a VM of guestOS type “Windows 7 server 64bit”, with disks plugged into an LSI SAS SCSI bus, we can use a template like:

NAME = myVMwareVM

CPU    = 1
MEMORY = 256

DISK = [IMAGE_ID="7"]
NIC  = [NETWORK="public"]

RAW=[
  DATA="<devices><controller type='scsi' index='0' model='lsisas1068'/></devices>",
  DATA_VMX="pciBridge0.present = \"TRUE\"\npciBridge4.present = \"TRUE\"\npciBridge4.virtualDev = \"pcieRootPort\"\npciBridge4.functions = \"8\"\npciBridge5.present = \"TRUE\"\npciBridge5.virtualDev = \"pcieRootPort\"\npciBridge5.functions = \"8\"\npciBridge6.present = \"TRUE\"\npciBridge6.virtualDev = \"pcieRootPort\"\npciBridge6.functions = \"8\"\npciBridge7.present = \"TRUE\"\npciBridge7.virtualDev = \"pcieRootPort\"\npciBridge7.functions = \"8\"\nguestOS = \"windows7srv-64\"",
  TYPE="vmware" ]

Tuning & Extending

The VMware Drivers consist of three drivers, with their corresponding files:

  • VMM Driver
    • /var/lib/one/remotes/vmm/vmware : commands executed to perform actions.
  • IM Driver
    • /var/lib/one/remotes/im/vmware.d : vmware IM probes.
  • TM Driver
    • /usr/lib/one/tm_commands : commands executed to perform transfer actions.

And the following driver configuration files:

  • VMM Driver
    • /etc/one/vmm_exec/vmm_exec_vmware.conf : This file is home for default values for domain definitions (in other words, OpenNebula templates). For example, if the user wants to set a default value for CPU requirements for all of their VMware domain definitions, simply edit the /etc/one/vmm_exec/vmm_exec_vmware.conf file and set a
  CPU=0.6

into it. Now, when defining a template to be sent to a VMware resource, the user has the choice of “forgetting” to set the CPU requirement, in which case it will default to 0.6.
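
For instance, with the default above in place, a template that omits CPU (the rest of the values are illustrative) would be deployed on a VMware resource with CPU set to 0.6:

NAME   = defaults-test
MEMORY = 256
DISK   = [ IMAGE_ID = "7" ]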

It is generally a good idea to place defaults for the VMware-specific attributes, that is, attributes that are mandatory for the VMware hypervisor but not for other hypervisors. It is also recommended to set defaults for attributes that are specific to VMware but not mandatory.

  • TM Driver
    • /etc/one/tm_vmware/tm_vmware.conf : This file contains the scripts tied to the different actions that the TM driver can deliver. Here you can deactivate functionality like the DELETE action (this can be accomplished using the dummy TM driver, dummy/tm_dummy.sh) or change the default behavior.

More generic information about drivers:

Appendix

ESX 4.x Configuration

ESX 4.x configuration differs a bit from the ESX 5.x configuration, mainly in the following two aspects.

Passwordless SSH

  • Add oneadmin's front-end account public key (FE → $HOME/.ssh/id_{rsa,dsa}.pub) to the ESX oneadmin account authorized_keys (ESX → $HOME/.ssh/authorized_keys).
  • Fix permissions with “chown -R oneadmin /etc/ssh/keys-oneadmin”
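
A minimal sketch of these two steps, run on the ESX 4.x host (the location of the copied public key, /tmp/id_rsa.pub, is illustrative):

<xterm>
$ su - oneadmin
$ mkdir -p $HOME/.ssh
$ cat /tmp/id_rsa.pub >> $HOME/.ssh/authorized_keys
$ chmod 600 $HOME/.ssh/authorized_keys
$ su -
# chown -R oneadmin /etc/ssh/keys-oneadmin
</xterm>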

VNC

In the vSphere client, go to the Configuration tab, Software → Security Profile, Properties of the firewall (in the right pane), and then tick the “VNC Server” checkbox.