VMware Driver Guide 1.4

The VMware Infrastructure API (VI API) provides a complete set of language-neutral interfaces to the VMware virtual infrastructure management framework. By targeting the VI API, the OpenNebula VMware drivers are able to manage various flavors of VMware hypervisors: ESXi (free), ESX and VMware Server.

VMware Configuration

Requirements

The front-end where OpenNebula is installed needs the following software:

  • A working Java runtime environment (the keystore handling and the drivers below depend on it)
  • The VMware VI SDK (see the Front-end Configuration section below)
  • Apache Axis (its jars have to be added to <oneadmin>'s CLASSPATH, as described below)

Besides, a VMware hypervisor is needed on the cluster nodes. You can choose and install one of the following hypervisors:

  • VMware ESXi (free)
  • VMware ESX
  • VMware Server

User Configuration

All the VMware hypervisors (that will be accessed by the same set of drivers) need to share the same user with the same password. This user can be created through the VMware Infrastructure Client (VIC), in the Users & Groups tab. This user needs to have the same UID as the oneadmin user in the OpenNebula front-end, and this can be set through the VIC as well.
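
To find out the UID that has to be replicated, run the following on the OpenNebula front-end:

<xterm> $ id -u oneadmin </xterm>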

:!: The VIC can be downloaded from https://<esx-hostname>:443/. Please be aware that you will need a Windows machine in order to run it.

Datastore Configuration

The default recommended storage configuration for VMware hypervisors is a shared filesystem between them, ideally NFS. This share has to be accessible to the OpenNebula front-end. Its name is identified by $DATASTORE (all the hypervisors have to mount the share under the same name), and its location in the OpenNebula front-end by the environment variable $DATASTORE_PATH.

This configuration can be set through the VIC as well: Configuration tab, Storage link in the left panel, “Add Storage” in the top right. You will need an NFS export with the “no_root_squash” option enabled (the VIC needs root permission to create and manage VMs).
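
As a sketch, the corresponding /etc/exports entry on the NFS server could look like the following (the path and hostnames are examples):

# /etc/exports -- export the datastore to the hypervisors and the front-end
/srv/datastore  esx1.example.org(rw,no_root_squash,sync)  one-frontend.example.org(rw,no_root_squash,sync)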

Front-end Configuration

With respect to the front-end, the following steps need to be taken to configure the drivers properly:

  • Install the VMware VI SDK following its Setup Guide. You should end up with a keystore containing VMware certificates installed in the <oneadmin> home folder. Briefly, you need to copy the /etc/vmware/ssl/rui.crt of all the hypervisors and add them to the OpenNebula front-end Java keystore with a command like:

<xterm> keytool -import -file <certificate-filename> -alias <esx-server-name> -keystore vmware.keystore </xterm>
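
For instance, assuming root SSH access to hypervisors named esx1 and esx2 (both names are illustrative), the certificates could be collected and imported in one pass:

<xterm>
$ for host in esx1 esx2; do
    scp root@$host:/etc/vmware/ssl/rui.crt /tmp/rui-$host.crt
    keytool -import -file /tmp/rui-$host.crt -alias $host -keystore vmware.keystore
  done
</xterm>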

  • Add all the jars in $AXISHOME/lib and $SDKHOME/samples/Axis/java/lib/ to <oneadmin>'s CLASSPATH, as sketched below.
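
A sketch of how this could be done in <oneadmin>'s .bashrc, assuming $AXISHOME and $SDKHOME are already set:

# Append every Axis and VI SDK sample jar to the CLASSPATH
for jar in $AXISHOME/lib/*.jar $SDKHOME/samples/Axis/java/lib/*.jar; do
  CLASSPATH=$CLASSPATH:$jar
done
export CLASSPATH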

Driver Installation

  • Go to the OpenNebula source code directory, navigate to src/vmm_mad/vmware and run the install-vmware.sh script:

<xterm>
$ cd src/vmm_mad/vmware
$ ./install-vmware.sh
</xterm>

Driver Files

The drivers consist of the following files:

Virtual Machine Manager (VMM)

  • $ONE_LOCATION/lib/mads/*.class : Driver libraries
  • $ONE_LOCATION/bin/one_vmm_vmware : Wrapper for VMware Virtual Machine Manager driver
  • $ONE_LOCATION/etc/vmm_vmware/vmm_vmwarerc : environment setup. Also useful for setting whether the driver should be verbose in the log entries.

Information Manager (IM)

  • $ONE_LOCATION/lib/mads/*.class : Driver libraries
  • $ONE_LOCATION/bin/one_im_vmware : Wrapper for VMware Information Manager driver
  • $ONE_LOCATION/etc/im_vmware/im_vmwarerc : environment setup. Also useful for setting whether the driver should be verbose in the log entries.

Configuration

OpenNebula Configuration

OpenNebula needs to be told how to run the drivers. This is configured in $ONE_LOCATION/etc/oned.conf, which needs to have the VMware transfer, information and virtualization drivers set as in the following lines:

#-------------------------------------------------------------------------------
#  VMWare Information Driver Manager sample configuration
#-------------------------------------------------------------------------------
IM_MAD = [
    name       = "im_vmware",
    executable = "one_im_vmware",
    arguments  = "--username <esxi_username> --password <esxi_password>"]
#-------------------------------------------------------------------------------
#-------------------------------------------------------------------------------
#  VMWare Virtualization Driver Manager sample configuration
#-------------------------------------------------------------------------------
VM_MAD = [
    name       = "vmm_vmware",
    executable = "one_vmm_vmware",
    arguments  = "--username <esxi_username> --password <esxi_password>",
    type       = "xml" ]
#-------------------------------------------------------------------------------
#-------------------------------------------------------------------------------
#  VMWare Transfer Driver Manager sample configuration
#-------------------------------------------------------------------------------
TM_MAD = [
    name       = "tm_vmware",
    executable = "one_tm",
    arguments  = "tm_vmware/tm_vmware.conf" ]
#-------------------------------------------------------------------------------

Driver Configuration

Drivers rely on environment variables to gather the necessary information to access and manage the VMware hypervisors. These can be set in the shell session scope (i.e., in the .bashrc of oneadmin) or in the rc files of each driver. The needed variables are:

  • VMWARE_TRUSTORE: Must point to the vmware.keystore file. (Needed by the information and virtualization drivers)
  • VMWARE_DATASTORE: Name of the datastore shared by all the hypervisors. (Needed by the virtualization driver)
  • VMWARE_DATACENTER: Name of the datacenter. This name has to be shared by all the hypervisors. By default it is ha-datacenter. (Needed by the virtualization driver)
  • DATASTORE_PATH: Path to the exported NFS share that the hypervisors mount as their datastore. This is the local path in the OpenNebula front-end. (Needed by the transfer driver)

A sample rc configuration file for the virtualization driver ($ONE_LOCATION/etc/vmm_vmware/vmm_vmwarerc) follows:

# This must point to a java keystore with appropriate certificates for accessing
# all the ESXi hosts
VMWARE_TRUSTORE=~/.vmware-certs/vmware.keystore

# Uncomment the following line to activate MAD debug
# ONE_MAD_DEBUG=1

# Datastore name
VMWARE_DATASTORE=datastore1

# Datacenter name
VMWARE_DATACENTER=ha-datacenter
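
The transfer driver additionally needs DATASTORE_PATH, which is not part of this rc file. A minimal sketch for oneadmin's .bashrc, assuming the NFS share is mounted locally under /srv/datastore (the path is an example):

# Local path of the shared datastore in the OpenNebula front-end
export DATASTORE_PATH=/srv/datastore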

Also, a Virtual Machine port group, defined within a Virtual Switch attached to a physical device, needs to be configured using the VIC (Configuration tab, Networking link in the left panel, “Add Networking” in the top right). Its name has to match the BRIDGE name of the OpenNebula Virtual Network defined to provide network configuration to the VMs. See the OpenNebula Virtual Networks documentation for more information.
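
For illustration, a minimal fixed Virtual Network template whose BRIDGE matches a port group named “VMWareNet” could look like this (the port group name and the lease address are assumptions):

NAME   = "VMWareNet"
TYPE   = FIXED
BRIDGE = "VMWareNet"
LEASES = [IP="192.168.0.10"]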

VMware Template Attributes

Relevant attributes in the Virtual Machine template for VMware are:

  • Name: Name of the Virtual Machine.
  • Memory: Expressed in MB.
  • CPU: Number of virtual CPUs that this VM will have assigned.
  • NIC: Which Virtual Network to connect to, or a direct setting of the MAC address. All the VM's Virtual Ethernet Cards (defined when the VM was created using the VIC) will be erased, and one or more Virtual Ethernet Cards will be added (one per NIC section). If no NIC section is present, previously defined Virtual Ethernet Cards won't be erased.
  • DISKS: Exactly one disk is required. Its source MUST point to the folder containing the VM in the datastore, and it has to be a local path in the OpenNebula front-end. Additionally, the CLONE and SAVE flags will affect the Virtual Machine as a whole.

Example:

NAME=VMwareVM
MEMORY=256
CPU=1

NIC=[NETWORK="VMWareNet"]

DISK=[ source="/images/vmware/myVM",
       clone="yes",
       save="no"]

Using the VMware Driver

In order to use a Virtual Machine with VMware hypervisors within OpenNebula it first needs to be created with the VIC in the shared datastore. To avoid security risks and enable Cloning and Saving capabilities, we recommend changing ownership of every file in the VM folder to the <oneadmin> user:

<xterm> $ chown -R <oneadmin> <path-to-the-VM-folder> </xterm>

To prevent the VMware hypervisors from asking questions about the VM's UUID when it is moved around, the .vmx file can be edited and the following added to it:

<xterm> uuid.action = "create" </xterm>

Also, to allow for any MAC address to be set for the VMs, the vmx has to include:

<xterm> ethernet0.checkMACAddress = "FALSE" </xterm>

:!: As an alternative to the consolidated shared filesystem approach, VMs can be staged to VMware hypervisors using ssh. See the VMware ssh transfer driver documentation for details.

:!: When adding a host with “onehost create”, use the host's FQDN or, if that doesn't work, the exact name shown in the VMware Web UI.
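
For example, using the driver names defined in the oned.conf sample above (the hostname is illustrative):

<xterm> $ onehost create esx1.example.org im_vmware vmm_vmware tm_vmware </xterm>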