VMware Drivers 3.2

The VMware Drivers enable the management of an OpenNebula cloud based on VMware ESX and/or VMware Server hypervisors. They use libvirt to invoke the Virtual Infrastructure SOAP API exposed by the VMware hypervisors, which results in much better integration and performance than the Java-based drivers traditionally found in the OpenNebula distribution.

It features a simple configuration process that will leverage the stability, performance and feature set of any existing VMware-based OpenNebula cloud.


Requirements

In order to use the VMware Drivers, some software dependencies have to be met:

  • libvirt: libvirt is used to access the VMware hypervisors, so it needs to be installed with ESX support. We recommend version 0.8.3 or higher, which enables interaction with the vCenter VMware product. You can verify that your libvirt build includes ESX support as shown after this list.
  • ESX, VMware Server: At least one VMware hypervisor needs to be installed. Further configuration for the DATASTORE is needed, and it is explained in the TM part of the Configuration section.
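
A quick way to check that a libvirt build includes ESX support is to open a test connection with virsh (a sketch; esx-host is a placeholder for one of your hypervisors, and you will be prompted for its credentials):

$ virsh -c 'esx://esx-host/?no_verify=1' version

If libvirt was built without ESX support, virsh will typically complain that no connection driver is available for the esx:// URI.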

Optional Requirements. To enable some OpenNebula features you may need:

  • ESX CLI (Network): In order to use the extended networking capabilities provided by the Networking subsystem for VMware, the vSphere CLI needs to be installed.
  • vMotion: VMware's vMotion capability allows live migration of a Virtual Machine between two ESX hosts, enabling load balancing between cloud worker nodes without downtime in the migrated virtual machine. In order to use this capability, the following requirements have to be met:
    • Shared storage between the source and target host, mounted in both hosts as the same DATASTORE (we are going to assume it is called “images” in the rest of this document)
    • vCenter Server installed and configured, details in Section 11 of the Installation Guide for ESX and vCenter.
    • A DATACENTER created in the vCenter server that includes all ESX hosts between which Virtual Machines want to be live migrated (we are going to assume it is called “Onecenter” in the rest of this document).
    • A user created in vCenter with the same username and password as the ones in the ESX hosts, with administrator permissions.

:!: Please note that the libvirt version shipped with some Linux distributions does not include ESX support. In these cases it may be necessary to recompile the libvirt package with the --with-esx option.

:!: A license compatible with remote access to the VMware hypervisor's API is needed. For instance, the free ESXi license doesn't allow this, while the evaluation license for the same hypervisor does.

Considerations & Limitations

  • The hook functionality is restricted to local hooks, since there is no SSH connection to the VMware ESX hypervisors to run remote scripts.
  • When the NFS server is a separate machine from the OpenNebula front-end, the front-end needs to mount in /var/lib/one the same NFS export mounted in the ESX hypervisors as the DATASTORE. If using SQLite, take into account that the database file (/var/lib/one/one.db) will then also reside on the NFS share.
  • Currently, the creation of empty DATABLOCKS (as explained in the Image Repository guide) is not available for VMware.
  • Only one vCenter can be used for live migration.
  • A VM can get stuck in the hypervisor waiting for a retry/cancel style question; such questions must be answered through the VI Client. Future versions of OpenNebula will deal with them.

VMware Configuration

Users & Groups

The creation of a user in the VMware hypervisor is recommended. Go to the Users & Groups tab in the VI Client, and create a new user (for instance, “oneadmin”) with the same UID and username as the oneadmin user executing OpenNebula in the front-end. Please remember to give full permissions to this user (Permissions tab).
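
You can look up the UID on the front-end before creating the user, for example (the 9001 values shown here match the NFS export example used later in this guide):

$ id oneadmin
uid=9001(oneadmin) gid=9001(oneadmin) groups=9001(oneadmin)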

Storage

The storage configuration requires the /var/lib/one folder of the front-end (the server running the OpenNebula instance) to be exported as an NFS share and mounted as a DATASTORE in the VMware hypervisor. The name of the datastore needs to be set in the VMM configuration.

Note also that the Image Repository must point to a path contained in /var/lib/one, which is the folder exported over NFS. This effectively deals with the special VMware storage requirements.

The front-end should export the share with the appropriate flags so the files created by the VMware hypervisor can be managed by OpenNebula. An example of a configuration line in /etc/exports:

/var/lib/one 192.168.1.0/24(rw,sync,no_subtree_check,root_squash,anonuid=9001,anongid=9001)

where 9001 is the UID and GID of the “oneadmin” user in the front-end.
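
On each ESX host the exported share then has to be mounted as the DATASTORE. This can be done through the VI Client or, as a sketch, from the ESX console with esxcfg-nas (assuming the front-end is reachable at 192.168.1.1 and the datastore is called “images”):

# esxcfg-nas -a -o 192.168.1.1 -s /var/lib/one images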

:!: If the Image Repository is not used, note that the .vmdk files need to reside inside the NFS volume exported as the datastore. Also, the descriptor .vmdk file needs to be renamed to disk.vmdk, while the flat .vmdk files need to be left untouched, as sketched below.
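
For example, for a hypothetical disk composed of a descriptor myimage.vmdk and a flat file myimage-flat.vmdk, only the descriptor is renamed:

$ mv myimage.vmdk disk.vmdk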

Networking

Networking can be used in two different modes: pre-defined (to use pre-defined port groups) or dynamic (to dynamically create port groups and perform VLAN tagging). Please refer to the VMware Networking guide for more details.

OpenNebula Configuration

OpenNebula Daemon (oned.conf)

In order to configure OpenNebula to work with the VMware drivers, the following sections need to be uncommented in the /etc/one/oned.conf file.

#-------------------------------------------------------------------------------
#  VMware Virtualization Driver Manager Configuration
#-------------------------------------------------------------------------------
VM_MAD = [
    name       = "vmm_vmware",
    executable = "one_vmm_sh",
    arguments  = "-t 15 -r 0 vmware",
    default    = "vmm_sh/vmm_sh_vmware.conf",
    type       = "vmware" ]

    
#-------------------------------------------------------------------------------
#  VMware Information Driver Manager Configuration
#-------------------------------------------------------------------------------
IM_MAD = [
    name       = "im_vmware",
    executable = "one_im_sh",
    arguments  = "-t 15 -r 0 vmware" ]
#-------------------------------------------------------------------------------

#-------------------------------------------------------------------------------
# VMware Transfer Manager Driver Configuration
#-------------------------------------------------------------------------------
TM_MAD = [
    name       = "tm_vmware",
    executable = "one_tm",
    arguments  = "tm_vmware/tm_vmware.conf" ]
#-------------------------------------------------------------------------------
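
After editing /etc/one/oned.conf, restart OpenNebula so the new drivers are loaded:

$ one stop
$ one start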

VMware Drivers (vmwarerc and vmm_exec/vmm_exec_vmware.conf)

The configuration attributes for the VMware drivers are set in the /etc/one/vmwarerc file. In particular the following values can be set:

OPTION        DESCRIPTION
:libvirt_uri  URI used to connect to VMware through libvirt. When using VMware Server, the connection string set under LIBVIRT_URI needs to have its prefix changed from esx to gsx
:username     username to access the VMware hypervisor
:password     password to access the VMware hypervisor
:datacenter   (only for vMotion) name of the datacenter where the hosts have been registered
:vcenter      (only for vMotion) name or IP of the vCenter server that manages the ESX hosts

Example of the configuration file:

:libvirt_uri: "esx://@HOST@/?no_verify=1"
:username: "oneadmin"
:password: "mypass"
:datacenter: "ha-datacenter"
:vcenter: "London-DC"

:!: Please be aware that the above rc file, in contrast with the other rc files in OpenNebula, uses YAML syntax; therefore, please put the values between quotes.

Finally, you need to set the name of the datastore to be used by the vSphere hosts in /etc/one/vmm_exec/vmm_exec_vmware.conf.
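
For instance, to use the “images” datastore assumed throughout this document, the entry could look like the following (a sketch; check the comments shipped in the file itself for the exact attribute syntax of your version):

DATASTORE = images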

OpenNebula hosts

The physical hosts containing the VMware hypervisors need to be added with the appropriate VMware Drivers. If the box running the VMware hypervisor is called, for instance, esx-host, the host would need to be registered with the following command (for dynamic network mode):

$ onehost create esx-host im_vmware vmm_vmware tm_vmware vmware

or, for pre-defined networking:

$ onehost create esx-host im_vmware vmm_vmware tm_vmware dummy
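
In either case, after a few monitoring cycles the new host and its resources should show up in the host list:

$ onehost list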

Usage

Virtual Networks

Please refer to the VMware Networking guide for the Virtual Network attributes supported for VMware-based datacenters.

Images

The Image Catalog introduced in OpenNebula v2.0 can also be used with the VMware Drivers, and the oneimage command can be used to register images in the Catalog.

To register an existing VMware disk you need to:

  • Place all the .vmdk files that make up a disk (they can be easily spotted: there is a main <name-of-the-image>.vmdk descriptor file and various <name-of-the-image>-sXXX.vmdk flat files) in the same directory, with no other files in it.
  • Afterwards, an image template needs to be written, using the vmware:// prefix for the PATH, referring to the location of the folder within the OpenNebula front-end. For example:
NAME = MyVMwareDisk
PATH = vmware:///absolute/path/to/disk/folder
TYPE = OS
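
Assuming the template above is saved in a file called vmware_image.one (a name chosen just for this example), the image is registered with:

$ oneimage create vmware_image.one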

Once registered with the oneimage create command, the image can be used as any other image in the OpenNebula system, as described in the Virtual Machine Images guide.

Virtual Machines

Following the last two sections, we can use a template for a VMware VM like:

NAME = myVMwareVM

CPU    = 1
MEMORY = 256

DISK = [IMAGE_ID="7"]
NIC  = [NETWORK="public"]
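
If this template is saved, for instance, in a file called vmware_vm.one (a hypothetical name), the VM is then created with:

$ onevm create vmware_vm.one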

Tuning & Extending

The VMware Drivers consist of three drivers, with their corresponding files:

  • VMM Driver
    • /var/lib/one/remotes/vmm/vmware : commands executed to perform actions.
  • IM Driver
    • /var/lib/one/remotes/im/vmware.d : vmware IM probes.
  • TM Driver
    • /usr/lib/one/tm_commands : commands executed to perform transfer actions.

And the following driver configuration files:

  • VMM Driver
    • /etc/one/vmm_exec/vmm_exec_vmware.conf : This file holds default values for domain definitions (in other words, OpenNebula templates). For example, if the user wants to set a default value for the CPU requirement of all their VMware domain definitions, simply edit the /etc/one/vmm_exec/vmm_exec_vmware.conf file and add
  CPU=0.6

to it. Now, when defining a template to be sent to a VMware resource, the user has the choice of “forgetting” to set the CPU requirement, in which case it will default to 0.6.

It is generally a good idea to set defaults for the VMware-specific attributes, that is, attributes that are mandatory for the VMware driver but not for other hypervisors. It is also recommended to set defaults for attributes that are specific to VMware but not mandatory.

  • TM Driver
    • /etc/one/tm_vmware/tm_vmware.conf : This file contains the scripts tied to the different actions that the TM driver can perform. Here you can deactivate functionality, like the DELETE action (this can be accomplished using the dummy TM driver, dummy/tm_dummy.sh), or change the default behavior.

More generic information about drivers: