VMware Drivers 3.4

The VMware Drivers enable the management of an OpenNebula cloud based on VMware ESX and/or VMware Server hypervisors. They use libvirt to invoke the Virtual Infrastructure SOAP API exposed by the VMware hypervisors, which provides much better integration and performance than the Java-based drivers traditionally found in the OpenNebula distribution.

They feature a simple configuration process and leverage the stability, performance and feature set of any existing VMware-based OpenNebula cloud.


Requirements

In order to use the VMware Drivers, some software dependencies have to be met:

  • libvirt: libvirt is used to access the VMware hypervisors, so it needs to be installed with ESX support. We recommend version 0.8.3 or higher, which enables interaction with the vCenter VMware product, required to use vMotion.
  • ESX, VMware Server: At least one VMware hypervisor needs to be installed. Further configuration for the DATASTORE is needed, and it is explained in the TM part of the Configuration section.

Optional Requirements. To enable some OpenNebula features you may need:

  • ESX CLI (Network): In order to use the extended networking capabilities provided by the Networking subsystem for VMware, the vSphere CLI needs to be installed.
  • vMotion: VMware's vMotion capability allows live migration of a Virtual Machine between two ESX hosts, enabling load balancing between cloud worker nodes without downtime in the migrated virtual machine. In order to use this capability, the following requisites have to be met:
    • Shared storage between the source and target host, mounted in both hosts as the same DATASTORE (we are going to assume it is called “images” in the rest of this document)
    • vCenter Server installed and configured, details in Section 11 of the Installation Guide for ESX and vCenter.
    • A datacenter created in the vCenter server that includes all ESX hosts between which Virtual Machines want to be live migrated (we are going to assume it is called “onecenter” in the rest of this document).
    • A user created in vCenter with the same username and password as in the ESX hosts, with administrator permissions.

:!: Please note that the libvirt version shipped with some Linux distributions does not include ESX support. In these cases it may be necessary to recompile the libvirt package with the --with-esx option.
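A quick way to tell whether the installed libvirt build includes the ESX driver is to attempt an esx:// connection with virsh. The commands below are a sketch: the host name is a placeholder, and the configure invocation shows a typical source rebuild.

```shell
# Probe for ESX support: this fails with a "no connection driver available"
# style error when libvirt was built without the ESX driver
# (the host name is a placeholder):
# virsh -c 'esx://esx-host/?no_verify=1' uri

# Typical steps to rebuild libvirt from source with ESX support enabled:
# ./configure --with-esx
# make
# sudo make install
```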

Considerations & Limitations

  • Only one vCenter can be used for live migration.
  • Swap disks cannot be created.
  • libvirt <= 0.9.2 (and possibly higher versions) does not report resource consumption of the VMs; the only information retrieved from the VMs is their state.

VMware Configuration

Users & Groups

The creation of a user in the VMware hypervisor is recommended. Go to the Users & Groups tab in the VI Client, and create a new user (for instance, “oneadmin”) with the same UID and username as the oneadmin user executing OpenNebula in the front-end. Please remember to give full permissions to this user (Permissions tab).

Since all the available DATASTORES in this version require SSH access, please remember to click the “Grant shell access to this user” checkbox.

The access via SSH needs to be passwordless. The following steps are only needed for ESX < 5.0:

  • Log in to the ESX host (ssh <esx-host>)
  • Become root (su)
  • $ chmod +w /etc/sudoers
  • $ vi /etc/sudoers
  • Comment out the following line:
#Defaults requiretty
  • Add the following line to avoid a password request on the sudo command:
oneadmin ALL=(ALL) NOPASSWD: ALL
  • $ chmod -w /etc/sudoers

The following applies to all ESX versions:

  • Add the public key of oneadmin's front-end account (FE → $HOME/.ssh/id_{rsa,dsa}.pub) to the ESX oneadmin account's authorized_keys file (ESX → $HOME/.ssh/authorized_keys). More information on passwordless ssh connections here
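The key distribution can be sketched as follows. The host name esx-host and the rsa key type are examples; in practice the key lives under the front-end oneadmin account's $HOME/.ssh, while the demo below writes to a temporary directory so it can run anywhere:

```shell
# Generate an RSA key pair for the front-end oneadmin account (skip this if
# one already exists; KEYDIR stands in for $HOME/.ssh in this sketch)
KEYDIR=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$KEYDIR/id_rsa"

# Append the public key to the oneadmin account on the ESX host
# ("esx-host" is the example host name used in this guide):
# cat "$KEYDIR/id_rsa.pub" | ssh oneadmin@esx-host 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
```

After this, `ssh oneadmin@esx-host` from the front-end should not prompt for a password.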

Storage

There are several possible configurations regarding storage. Considerations must be made for the system datastore and for the VMware datastores, which can be configured with different transfer managers: ssh, shared and VMware-specific. Please refer to the VMware Datastore guide for more details.

Networking

Networking can be used in two different modes: pre-defined (to use pre-defined port groups) or dynamic (to dynamically create port groups and perform VLAN tagging). Please refer to the VMware Networking guide for more details.

OpenNebula Configuration

OpenNebula Daemon

In order to configure OpenNebula to work with the VMware drivers, the following sections need to be uncommented in the /etc/one/oned.conf file.

#-------------------------------------------------------------------------------
#  VMware Virtualization Driver Manager Configuration
#-------------------------------------------------------------------------------
VM_MAD = [
    name       = "vmm_vmware",
    executable = "one_vmm_sh",
    arguments  = "-t 15 -r 0 vmware",
    default    = "vmm_exec/vmm_exec_vmware.conf",
    type       = "vmware" ]

#-------------------------------------------------------------------------------
#  VMware Information Driver Manager Configuration
#-------------------------------------------------------------------------------
IM_MAD = [
    name       = "im_vmware",
    executable = "one_im_sh",
    arguments  = "-t 15 -r 0 vmware" ]
#-------------------------------------------------------------------------------

#-------------------------------------------------------------------------------
# Transfer Manager Driver Configuration
#-------------------------------------------------------------------------------
TM_MAD = [
    executable = "one_tm",
    arguments  = "-t 15 -d dummy,lvm,shared,ssh,vmware,iscsi" ]
#-------------------------------------------------------------------------------

#-------------------------------------------------------------------------------
# Datastore Manager Driver Configuration
#-------------------------------------------------------------------------------
DATASTORE_MAD = [
    executable = "one_datastore",
    arguments  = "-t 15 -d fs,vmware,iscsi" ]
#-------------------------------------------------------------------------------

VMware Drivers

The configuration attributes for the VMware drivers are set in the /etc/one/vmwarerc file. In particular the following values can be set:

ATTRIBUTE    DESCRIPTION
:libvirt_uri URI used to connect to VMware through libvirt. When using VMware Server, the prefix of the URI needs to be changed from esx to gsx
:username    Username to access the VMware hypervisor
:password    Password to access the VMware hypervisor
:datacenter  (only for vMotion) Name of the datacenter where the hosts have been registered
:vcenter     (only for vMotion) Name or IP of the vCenter server that manages the ESX hosts

Example of the configuration file:

:libvirt_uri: "esx://@HOST@/?no_verify=1&auto_answer=1"
:username: "oneadmin"
:password: "mypass"
:datacenter: "ha-datacenter"
:vcenter: "London-DC"

:!: Please be aware that the above rc file, in stark contrast with other rc files in OpenNebula, uses YAML syntax; therefore, please input the values between quotes.

Finally you need to set the name of the system datastore to be used in the vSphere hosts in /etc/one/vmm_exec/vmm_exec_vmware.conf. More details on datastores for VMware here.

VMware physical hosts

The physical hosts containing the VMware hypervisors need to be added with the appropriate VMware Drivers. If the box running the VMware hypervisor is called, for instance, esx-host, the host would need to be registered with the following command (dynamic network mode):

$ onehost create esx-host -i im_vmware -v vmm_vmware -n vmware

or for pre-defined networking

$ onehost create esx-host -i im_vmware -v vmm_vmware -n dummy

Usage

Virtual Networks

Please refer to the VMware Networking guide for the Virtual Network attributes supported for VMware-based datacenters.

Images

The Datastores subsystem introduced in OpenNebula v3.4 needs to be used in order to register images in the OpenNebula catalog.

To register an existing VMware disk you need to:

  • Place all the .vmdk files that make up a disk (they can be easily spotted: there is a main <name-of-the-image>.vmdk file, and various <name-of-the-image-sXXX>.vmdk flat files) in the same directory, with no more files than these.
  • Afterwards, an image template needs to be written, using the absolute path to the directory as the PATH value. For example:
NAME = MyVMwareDisk
PATH = /absolute/path/to/disk/folder
TYPE = OS
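As an illustration, the template above can be saved to a file and registered with the oneimage command. The file name and the datastore name vmware_ds are assumptions; use the name or ID of your actual VMware datastore:

```shell
# Save the image template shown above to a file
cat > myvmwaredisk.one <<'EOF'
NAME = MyVMwareDisk
PATH = /absolute/path/to/disk/folder
TYPE = OS
EOF

# Register it in a VMware datastore (datastore name is an example):
# oneimage create myvmwaredisk.one -d vmware_ds
```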

Once registered, the image can be used as any other image in the OpenNebula system, as described in the Virtual Machine Images guide.

Virtual Machines

Following the two previous sections, we can use a template for a VMware VM like:

NAME = myVMwareVM

CPU    = 1
MEMORY = 256

DISK = [IMAGE_ID="7"]
NIC  = [NETWORK="public"]
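A hypothetical session instantiating this template is sketched below; the IMAGE_ID 7 and the network named “public” must already exist in your installation:

```shell
# Save the VM template shown above to a file
cat > myvmwarevm.one <<'EOF'
NAME = myVMwareVM

CPU    = 1
MEMORY = 256

DISK = [IMAGE_ID="7"]
NIC  = [NETWORK="public"]
EOF

# Submit it to OpenNebula:
# onevm create myvmwarevm.one
```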

Tuning & Extending

The VMware Drivers consist of three drivers, with their corresponding files:

  • VMM Driver
    • /var/lib/one/remotes/vmm/vmware : commands executed to perform actions.
  • IM Driver
    • /var/lib/one/remotes/im/vmware.d : vmware IM probes.
  • TM Driver
    • /usr/lib/one/tm_commands : commands executed to perform transfer actions.

And the following driver configuration files:

  • VMM Driver
    • /etc/one/vmm_exec/vmm_exec_vmware.conf : This file is home for default values for domain definitions (in other words, OpenNebula templates). For example, if the user wants to set a default value for the CPU requirement of all of their VMware domain definitions, simply edit the /etc/one/vmm_exec/vmm_exec_vmware.conf file and add

  CPU=0.6

to it. Now, when defining a template to be sent to a VMware resource, the user has the choice of “forgetting” to set the CPU requirement, in which case it will default to 0.6.

It is generally a good idea to place defaults for the VMware-specific attributes, that is, attributes that are mandatory for the VMware hypervisor but not for other hypervisors. Setting defaults for the non-mandatory, VMware-specific attributes is also recommended.

  • TM Driver
    • /etc/one/tm_vmware/tm_vmware.conf : This file contains the scripts tied to the different actions that the TM driver can perform. Here you can deactivate functionality, like the DELETE action (this can be accomplished using the dummy TM driver, dummy/tm_dummy.sh), or change the default behavior.

More generic information about drivers: