VMware Drivers 3.8

The VMware Drivers enable the management of an OpenNebula cloud based on VMware ESX and/or VMware Server hypervisors. They use libvirt to invoke the Virtual Infrastructure SOAP API exposed by the VMware hypervisors, offering much better integration and performance than the Java-based drivers traditionally found in the OpenNebula distribution.

It features a simple configuration process that will leverage the stability, performance and feature set of any existing VMware-based OpenNebula cloud.

inlinetoc

Requirements

In order to use the VMware Drivers, some software dependencies have to be met:

  • libvirt: Libvirt is used to access the VMware hypervisors, so it needs to be installed with ESX support (a quick way to check this is shown after this list). We recommend version 0.8.3 or higher, which enables interaction with the vCenter VMware product, required to use vMotion.
  • ESX, VMware Server: At least one VMware hypervisor needs to be installed. Further configuration for the DATASTORE is needed, and it is explained in the TM part of the Configuration section.
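
One possible way to verify that the installed libvirt was built with ESX support is to check the compiled-in hypervisor list reported by virsh, and to try a direct connection to one of the hosts (the host name is illustrative):

<xterm> $ virsh -V | grep -i esx
$ virsh -c 'esx://esx-host/?no_verify=1' version </xterm>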

Optional Requirements. To enable some OpenNebula features you may need:

  • ESX CLI (Network): In order to use the extended networking capabilities provided by the Networking subsystem for VMware, the vSphere CLI needs to be installed.
  • vMotion: VMware's vMotion capability allows live migration of a Virtual Machine between two ESX hosts, allowing for load balancing between cloud worker nodes without downtime in the migrated virtual machine (see the usage example after this list). In order to use this capability, the following requisites have to be met:
    • Shared storage between the source and target host, mounted in both hosts as the same DATASTORE (we are going to assume it is called “images” in the rest of this document)
    • vCenter Server installed and configured, details in Section 11 of the Installation Guide for ESX and vCenter.
    • A datacenter created in the vCenter server that includes all ESX hosts between which Virtual Machines want to be live migrated (we are going to assume it is called “onecenter” in the rest of this document).
    • A user created in vCenter with the same username and password as the ones in the ESX hosts, with administrator permissions.
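
Once these requisites are in place, a live migration is requested through the regular OpenNebula CLI; for example, to live migrate VM 0 to host 1 (IDs are illustrative, assuming the onevm livemigrate subcommand shipped with this release):

<xterm> $ onevm livemigrate 0 1 </xterm>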

:!: Please note that the libvirt version shipped with some Linux distributions does not include ESX support. In these cases it may be necessary to recompile the libvirt package with the --with-esx option.
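
A rough sketch of such a rebuild from source follows (the version number is illustrative; using your distribution's packaging tools is usually preferable):

<xterm> $ tar xzf libvirt-0.9.3.tar.gz && cd libvirt-0.9.3
$ ./configure --with-esx
$ make
$ sudo make install </xterm>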

Considerations & Limitations

  • Only one vCenter can be used for live migration.
  • Cannot create swap disks.
  • libvirt <= 0.9.2 (and possibly higher versions) does not report resource consumption of the VMs; the only information retrieved from the VMs is their state.
  • In order to use the attach/detach functionality, the original VM must have at least one SCSI disk, and the disk to be attached/detached must be placed on a SCSI bus (i.e., “sd” as DEV_PREFIX).
  • :!: ESX 5.1 not supported yet
  • :!: The ESX hosts need to be properly licensed, with write access to the exported API (as the Evaluation license does). More information on valid licenses here.
  • SELinux configuration in CentOS may be a source of problems in the Information Driver. If you are experiencing a “Permission denied” error when the IM tries to monitor a host, try disabling SELinux to narrow down the problem. This can be done by editing /etc/selinux/config, and setting:

<xterm> SELINUX=disabled </xterm>
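
The change above requires a reboot to take effect; to switch SELinux to permissive mode at runtime for a quick test, you can also run (as root):

<xterm> # setenforce 0 </xterm>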

VMware Configuration

Users & Groups

The creation of a user in the VMware hypervisor is recommended. Go to the Users & Groups tab in the VI Client, and create a new user (for instance, “oneadmin”) with the same UID and username as the oneadmin user executing OpenNebula in the front-end. Please remember to give full permissions to this user (Permissions tab).

Since all the available DATASTORES in this version require SSH access, please remember to click the “Grant shell access to this user” checkbox. Also, if we want to use the vmware datastore, we are going to need to add “oneadmin” to the root group.

The access via SSH needs to be passwordless. Please follow the next steps to configure the ESX node:

  • log in to the ESX host (ssh <esx-host>)

For ESX 5.x

<xterm> $ su -
$ mkdir /etc/ssh/keys-oneadmin
$ chmod 755 /etc/ssh/keys-oneadmin
$ su - oneadmin
$ vi /etc/ssh/keys-oneadmin/authorized_keys
<paste here the contents of the oneadmin's front-end account public key (FE → $HOME/.ssh/id_{rsa,dsa}.pub) and exit vi>
$ chmod 600 /etc/ssh/keys-oneadmin/authorized_keys </xterm>

For ESX 4.x configuration, see the Appendix.

More information on passwordless ssh connections here.
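
To verify the setup from the front-end, the following should log into the ESX host and run a command without prompting for a password (the host name is illustrative):

<xterm> $ su - oneadmin
$ ssh oneadmin@esx-host hostname </xterm>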

:!: After registering a datastore, make sure that the “oneadmin” user can write in said datastore (this is not needed if the “root” user is used to access the ESX). In case “oneadmin” cannot write in “/vmfs/volumes/<ds_id>”, then permissions need to be adjusted. This can be done in various ways, the recommended one being:

  • Add “oneadmin” to the “root” group using the Users & Group tab in the VI Client
  • $ chmod g+w /vmfs/volumes/<ds_id> in the ESX host

:!: If you plan to use the vmware or vmfs datastores, a couple of steps remain:

<xterm> $ su
$ chmod +s /sbin/vmkfstools </xterm>

:!: In order to use the attach/detach functionality for VM disks, some extra configuration steps are needed.

:!: Persistency of the ESX filesystem has to be handled with care. Most ESX 5 files reside in an in-memory filesystem, which means faster access but also no persistence across reboots, something that can be inconvenient when managing an ESX farm for an OpenNebula cloud.

Here is a recipe to make the configuration needed for OpenNebula persistent across reboots. The changes need to be done as root.

<xterm> # vi /etc/rc.local

 ## Add this at the bottom of the file

mkdir /etc/ssh/keys-oneadmin
cat > /etc/ssh/keys-oneadmin/authorized_keys << _SSH_KEYS_
ssh-rsa <really long string with oneadmin's ssh public key>
_SSH_KEYS_
chmod 600 /etc/ssh/keys-oneadmin/authorized_keys
chmod +s /sbin/vmkfstools /bin/vim-cmd
chmod 755 /etc/ssh/keys-oneadmin
chown oneadmin /etc/ssh/keys-oneadmin/authorized_keys

# /sbin/auto-backup.sh </xterm>

This information was based on this blog post.

Storage

There are several possible configurations regarding storage. Considerations must be made for the system datastore and for the vmware or vmfs datastores, which can be configured with different transfer managers: ssh, shared and a VMware-specific one. Please refer to the VMware Storage Model guide for more details.

:!: In order to get volatile disk functionality (i.e., for OpenNebula to be capable of creating RAW disks on the fly), the ESX hosts need to be able to execute 'vmkfstools' and 'vim-cmd' as non-root. To achieve this, please run the following in the ESX hosts as root:

<xterm> $ chmod +s /sbin/vmkfstools /bin/vim-cmd </xterm>

Networking

Networking can be used in two different modes: pre-defined (to use pre-defined port groups) or dynamic (to dynamically create port groups and perform VLAN tagging). Please refer to the VMware Networking guide for more details.

VNC

In order to access running VMs through VNC, the ESX host needs to be configured beforehand, basically to allow inbound VNC connections through its firewall.

For ESX 5.x please follow this guide. For ESX 4.x take a look at the Appendix.

OpenNebula Configuration

OpenNebula Daemon

In order to configure OpenNebula to work with the VMware drivers, the following sections need to be uncommented in the /etc/one/oned.conf file.

#-------------------------------------------------------------------------------
#  VMware Virtualization Driver Manager Configuration
#-------------------------------------------------------------------------------
VM_MAD = [
    name       = "vmm_vmware",
    executable = "one_vmm_sh",
    arguments  = "-t 15 -r 0 vmware",
    default    = "vmm_exec/vmm_exec_vmware.conf",
    type       = "vmware" ]

#-------------------------------------------------------------------------------
#  VMware Information Driver Manager Configuration
#-------------------------------------------------------------------------------
IM_MAD = [
    name       = "im_vmware",
    executable = "one_im_sh",
    arguments  = "-t 15 -r 0 vmware" ]
#-------------------------------------------------------------------------------

#-------------------------------------------------------------------------------
# Transfer Manager Driver Configuration
#-------------------------------------------------------------------------------
TM_MAD = [
    executable = "one_tm",
    arguments  = "-t 15 -d dummy,lvm,shared,ssh,vmware,iscsi" ]
#-------------------------------------------------------------------------------

#-------------------------------------------------------------------------------
# Datastore Manager Driver Configuration
#-------------------------------------------------------------------------------
DATASTORE_MAD = [
    executable = "one_datastore",
    arguments  = "-t 15 -d fs,vmware,iscsi" ]
#-------------------------------------------------------------------------------

VMware Drivers

The configuration attributes for the VMware drivers are set in the /etc/one/vmwarerc file. In particular the following values can be set:

ATTRIBUTE     DESCRIPTION
:libvirt_uri  URI used to connect to VMware through libvirt. When using VMware Server, the connection string set under :libvirt_uri needs to have its prefix changed from esx to gsx
:username     username to access the VMware hypervisor
:password     password to access the VMware hypervisor
:datacenter   (only for vMotion) name of the datacenter where the hosts have been registered
:vcenter      (only for vMotion) name or IP of the vCenter that manages the ESX hosts

Example of the configuration file:

:libvirt_uri: "esx://@HOST@/?no_verify=1&auto_answer=1"
:username: "oneadmin"
:password: "mypass"
:datacenter: "ha-datacenter"
:vcenter: "London-DC"

:!: Please be aware that the above rc file, in stark contrast with other rc files in OpenNebula, uses YAML syntax; therefore, please enclose the values in quotes.

VMware physical hosts

The physical hosts containing the VMware hypervisors need to be added with the appropriate VMware Drivers. If the box running the VMware hypervisor is called, for instance, esx-host, the host would need to be registered with the following command (dynamic network mode):

$ onehost create esx-host -i im_vmware -v vmm_vmware -n vmware

or for pre-defined networking

$ onehost create esx-host -i im_vmware -v vmm_vmware -n dummy
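
After a couple of monitoring cycles the new host should show up as monitored, which can be checked with the standard CLI:

<xterm> $ onehost list
$ onehost show esx-host </xterm>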

Attach/Detach Functionality Configuration

The attach/detach functionality needs some special configuration both in the front-end and the worker node.

OpenNebula Daemon

  • In /etc/one/oned.conf, the variable SCRIPTS_REMOTE_DIR should point to /tmp/one
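
That is, the following line should be present (and uncommented) in /etc/one/oned.conf:

SCRIPTS_REMOTE_DIR=/tmp/one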

VMware Daemon

For ESX > 5.0

<xterm> $ su
$ chmod +s /bin/vim-cmd </xterm>

:!: Due to conventions adopted by OpenNebula (mainly, always placing the context disk as the second disk), the first newly attached disk will be assigned a DISK_ID of 2 (assuming that the original VM only has one disk; otherwise it will be # DISKS + 1), the second added disk will have a DISK_ID of 3, and so on. This is especially important for the detach functionality.
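
For instance, on a VM whose template defines a single disk, detaching the first hot-attached disk would therefore reference DISK_ID 2 (the VM ID below is illustrative):

<xterm> $ onevm detachdisk 24 2 </xterm>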

Usage

Virtual Networks

Please refer to the VMware Networking guide for the Virtual Network attributes supported for VMware-based datacenters.

Images

The Datastores subsystem introduced in OpenNebula v3.4 needs to be used in order to register images in the OpenNebula catalog.

To register an existing VMware disk of type OS you need to:

  • Place all the .vmdk files that make up a disk (they can be easily spotted: there is a main <name-of-the-image>.vmdk file, and various <name-of-the-image>-sXXX.vmdk flat files) in the same directory, with no more files than these.
  • Afterwards, an image template needs to be written, using the absolute path to the directory as the PATH value. For example:
NAME = MyVMwareDisk
PATH =/absolute/path/to/disk/folder
TYPE = OS
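
The template can then be registered in the corresponding VMware image datastore (the file and datastore names are illustrative):

<xterm> $ oneimage create myvmwaredisk.one -d vmware_ds </xterm>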

:!: To register a .iso file with type CDROM there is no need to create a folder; just set PATH to the absolute path of the .iso file.
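
For example, a minimal template for such a CDROM image could look like:

NAME = MyVMwareCD
PATH = /absolute/path/to/image.iso
TYPE = CDROM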

Once registered, the image can be used as any other image in the OpenNebula system, as described in the Virtual Machine Images guide.

Due to the different tools available in the front-end and the ESX hosts, there are different possibilities when creating DATABLOCKS or volatile disks, in terms of the formats that can be enforced on the newly created disks:

DATABLOCKS

New disks created in a particular datastore are created in the front-end, so the template of the DATABLOCK can reference in the FSTYPE attribute any value understood by the Unix mkfs command.
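
For instance, a minimal DATABLOCK image template formatted with ext3 could be (the values are illustrative):

NAME   = MyDataBlock
TYPE   = DATABLOCK
FSTYPE = ext3
SIZE   = 1024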

Virtual Machines

VOLATILE DISKS

At the time of defining a VM, a volatile disk can be described, which will be created in the remote ESX host. This holds both for disks defined in the VM template and for the file needed in the “onevm attachdisk” operation. In this case it is not possible to format the disk, so it will appear as a raw device on the guest, which will then need to be formatted. Possible values for the FORMAT attribute (more info on this here):

  • vmdk_thin
  • vmdk_zeroedthick
  • vmdk_eagerzeroedthick

Following the last two sections, we can use a template for a VMware VM like:

NAME = myVMwareVM

CPU    = 1
MEMORY = 256

DISK = [IMAGE_ID="7"]
DISK = [FORMAT="vmdk_thin", SIZE="1024",TYPE="fs"]
NIC  = [NETWORK="public"]
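
The VM can then be created from this template file as usual (the file name is illustrative):

<xterm> $ onevm create myvmware_vm.tpl </xterm>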

Tuning & Extending

The VMware Drivers consist of three drivers, with their corresponding files:

  • VMM Driver
    • /var/lib/one/remotes/vmm/vmware : commands executed to perform actions.
  • IM Driver
    • /var/lib/one/remotes/im/vmware.d : vmware IM probes.
  • TM Driver
    • /usr/lib/one/tm_commands : commands executed to perform transfer actions.

And the following driver configuration files:

  • VMM Driver
    • /etc/one/vmm_exec/vmm_exec_vmware.conf : This file is home for default values for domain definitions (in other words, OpenNebula templates). For example, if the user wants to set a default value for CPU requirements for all of their VMware domain definitions, simply edit the /etc/one/vmm_exec/vmm_exec_vmware.conf file and set a
  CPU=0.6

into it. Now, when defining a template to be sent to a VMware resource, the user has the choice of “forgetting” to set the CPU requirement, in which case it will default to 0.6.

It is generally a good idea to place defaults for the VMware-specific attributes, that is, attributes mandatory for the VMware hypervisor that are not mandatory for other hypervisors. Attributes that are not mandatory but are specific to VMware are also recommended to have a default.

  • TM Driver
    • /etc/one/tm_vmware/tm_vmware.conf : This file contains the scripts tied to the different actions that the TM driver can deliver. Here you can deactivate functionality like the DELETE action (this can be accomplished using the dummy tm driver, dummy/tm_dummy.sh) or change the default behavior.
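
For instance, following the action-to-script mapping used in that file, disabling the DELETE action could look roughly like the line below (check the existing entries in tm_vmware.conf for the exact format):

DELETE = dummy/tm_dummy.sh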

More generic information about drivers:

Appendix

Passwordless SSH for ESX 4.x

  • Add oneadmin's front-end account public key (FE → $HOME/.ssh/id_{rsa,dsa}.pub) to the ESX oneadmin account authorized_keys (ESX → $HOME/.ssh/authorized_keys).
  • Fix permissions with “chown -R oneadmin /etc/ssh/keys-oneadmin”

VNC for ESX 4.x

In the vSphere client, go to Configuration tab, Software Security Profile, Properties of the firewall (in the right pane) and then tick the “VNC Server” checkbox.