VMware Driver Guide 1.4
The VMware Infrastructure API (VI API) provides a complete set of language-neutral interfaces to the VMware virtual infrastructure management framework. By targeting the VI API, the OpenNebula VMware drivers are able to manage various flavors of VMware hypervisors: ESXi (free), ESX and VMware Server.
The front-end where OpenNebula is installed needs the following software:
Besides, a VMware hypervisor is needed in the cluster nodes. You can choose and install one of the following hypervisors:
All the VMware hypervisors that will be accessed by the same set of drivers need to share the same user with the same password. This user can be created through the VMware Infrastructure Client (VIC), in the Users & Groups tab. This user needs to have the same UID as the oneadmin user has in the OpenNebula front-end, and this can be set through the VIC as well.
The default recommended storage configuration for VMware hypervisors is to use a shared filesystem between them, ideally NFS. This share has to be accessible to the OpenNebula front-end. Its name is identified by $DATASTORE (all the hypervisors have to mount the share with the same name), and its location in the OpenNebula front-end by the environment variable $DATASTORE_PATH.
This configuration can be set through the VIC as well: Configuration tab, Storage link in the left panel, "Add Storage" in the top right. You will need an NFS export with the "no_root_squash" option enabled (the VIC needs root permission to create and manage VMs).
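For reference, a minimal export entry with that option enabled might look like the following (the path and client range are site-specific placeholders, not defaults):

```
# /etc/exports on the NFS server (example path and client network)
/srv/cloud/datastore1  192.168.0.0/24(rw,sync,no_root_squash,no_subtree_check)
```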
With respect to the front-end, the following steps need to be taken in order to configure the drivers properly:

Obtain the certificate file /etc/vmware/ssl/rui.crt from all the hypervisors and add them to the OpenNebula front-end Java keystore with a command like:

<xterm>
keytool -import -file <certificate-filename> -alias <esx-server-name> -keystore vmware.keystore
</xterm>
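When several hypervisors are involved, the import can be scripted. The sketch below only prints the commands it would run; the host names and the /tmp/certs path are placeholders for illustration, so adapt them and run each printed command (or drop the collecting variable and invoke keytool directly) once the paths match your site:

```shell
# Build one keytool import command per hypervisor certificate.
# Host names and certificate paths below are hypothetical examples.
KEYSTORE=vmware.keystore
CMDS=""
for host in esx-host1 esx-host2; do
  CMD="keytool -import -file /tmp/certs/${host}.crt -alias ${host} -keystore ${KEYSTORE}"
  CMDS="${CMDS}${CMD}
"
done
# Print the commands for review before executing them.
printf '%s' "$CMDS"
```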
Then, install the drivers from the OpenNebula source tree:

<xterm>
$ cd src/vmm_mad/vmware
$ ./install-vmware.sh
</xterm>
The drivers consist of the following files:

Virtual Machine Manager driver:

- $ONE_LOCATION/lib/mads/*.class : driver libraries
- $ONE_LOCATION/bin/one_vmm_vmware : wrapper for the VMware Virtual Machine Manager driver
- $ONE_LOCATION/etc/vmm_vmware/vmm_vmwarerc : environment setup. Also useful for setting whether the driver should be verbose in the log entries.

Information Manager driver:

- $ONE_LOCATION/lib/mads/*.class : driver libraries
- $ONE_LOCATION/bin/one_im_vmware : wrapper for the VMware Information Manager driver
- $ONE_LOCATION/etc/im_vmware/im_vmwarerc : environment setup. Also useful for setting whether the driver should be verbose in the log entries.
OpenNebula needs to be told how to run the drivers. The place to do so is $ONE_LOCATION/etc/oned.conf, which needs to have the VMware transfer, information and virtualization drivers set, as in the following lines:
#-------------------------------------------------------------------------------
# VMWare Information Driver Manager sample configuration
#-------------------------------------------------------------------------------
IM_MAD = [
    name       = "im_vmware",
    executable = "one_im_vmware",
    arguments  = "--username <esxi_username> --password <esxi_password>" ]
#-------------------------------------------------------------------------------

#-------------------------------------------------------------------------------
# VMWare Virtualization Driver Manager sample configuration
#-------------------------------------------------------------------------------
VM_MAD = [
    name       = "vmm_vmware",
    executable = "one_vmm_vmware",
    arguments  = "--username <esxi_username> --password <esxi_password>",
    type       = "xml" ]
#-------------------------------------------------------------------------------

#-------------------------------------------------------------------------------
# VMWare Transfer Driver Manager sample configuration
#-------------------------------------------------------------------------------
TM_MAD = [
    name       = "tm_vmware",
    executable = "one_tm",
    arguments  = "tm_vmware/tm_vmware.conf" ]
#-------------------------------------------------------------------------------
The drivers rely on environment variables to gather the information they need to access and manage the VMware hypervisors. These can be set in the shell session scope (i.e. in the .bashrc of oneadmin) or in the rc files of each driver. The needed variables are:
- VMWARE_TRUSTORE : must point to the vmware.keystore file. (Needed by the information and virtualization drivers.)
- VMWARE_DATASTORE : name of the datastore shared by all the hypervisors. (Needed by the virtualization driver.)
- VMWARE_DATACENTER : name of the datacenter. This name has to be shared between all the hypervisors. By default it is ha-datacenter. (Needed by the virtualization driver.)
- DATASTORE_PATH : path to the exported NFS share that the hypervisors mount as their datastore. This is the local path in the OpenNebula front-end. (Needed by the transfer driver.)
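For the session-scope alternative, the variables can be exported from oneadmin's .bashrc. The values below are site-specific examples, not defaults, so replace them with the names used at your installation:

```shell
# Example .bashrc entries for the oneadmin user (all values are placeholders)
export VMWARE_TRUSTORE="$HOME/.vmware-certs/vmware.keystore"
export VMWARE_DATASTORE="datastore1"
export VMWARE_DATACENTER="ha-datacenter"
export DATASTORE_PATH="/srv/cloud/datastore1"
```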
A sample rc configuration file for the virtualization driver ($ONE_LOCATION/etc/vmm_vmware/vmm_vmwarerc) follows:
# This must point to a java keystore with appropriate certificates for accessing
# all the ESXi hosts
VMWARE_TRUSTORE=~/.vmware-certs/vmware.keystore

# Uncomment the following line to activate MAD debug
# ONE_MAD_DEBUG=1

# Datastore name
VMWARE_DATASTORE=datastore1

# Datacenter name
VMWARE_DATACENTER=ha-datacenter
Also, a Virtual Machine port group defined within a Virtual Switch attached to a physical device needs to be configured using the VIC (Configuration tab, Networking link in the left panel, Add Networking in the top right). Its name has to match the BRIDGE name of the OpenNebula Virtual Network defined to provide network configuration to the VMs. More information about Virtual Networks in OpenNebula can be found here.
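As an illustration, a fixed Virtual Network template whose BRIDGE matches a port group named "VMWareNet" could look like the sketch below (the network name, bridge name and lease address are all examples, not required values):

```
NAME   = "VMWareNet"
TYPE   = FIXED
BRIDGE = "VMWareNet"
LEASES = [ IP = "192.168.0.10" ]
```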
Relevant attributes in the Virtual Machine template for VMware are:

- NAME, MEMORY and CPU : general VM characteristics, defined as in any other OpenNebula template.
- NIC : network interface attached to an OpenNebula Virtual Network. If no NIC section is present, previously defined Virtual Ethernet Cards won't be erased.
- DISK : the source points to the VM folder in the datastore; clone and save control the Cloning and Saving behavior described below.

Example:
NAME   = VMwareVM
MEMORY = 256
CPU    = 1

NIC  = [ NETWORK = "VMWareNet" ]

DISK = [
  source = "/images/vmware/myVM",
  clone  = "yes",
  save   = "no" ]
In order to use a Virtual Machine with VMware hypervisors within OpenNebula it first needs to be created with the VIC in the shared datastore. To avoid security risks and enable Cloning and Saving capabilities, we recommend changing ownership of every file in the VM folder to the <oneadmin> user:
<xterm> $ chown -R <oneadmin> <path-to-the-VM-folder> </xterm>
To avoid questions asked by the VMware hypervisors about the VM's uuid when it is moved around, the vmx file can be edited and the following added to it:

<xterm> uuid.action = "create" </xterm>
Also, to allow any MAC address to be set for the VMs, the vmx file has to include:

<xterm> ethernet0.checkMACAddress = "FALSE" </xterm>
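Both settings can be appended idempotently with a small shell sketch. The .vmx file name below is a placeholder; point it at the VM folder in the datastore (the touch line exists only so the demo runs on its own, since in practice the file already exists):

```shell
# Add the uuid and MAC settings to a .vmx file only if not already present.
# "myVM.vmx" is a hypothetical path; replace it with your VM's .vmx file.
VMX="myVM.vmx"
touch "$VMX"   # demo only; the real file is created by the VIC
grep -q '^uuid\.action' "$VMX" || \
  echo 'uuid.action = "create"' >> "$VMX"
grep -q '^ethernet0\.checkMACAddress' "$VMX" || \
  echo 'ethernet0.checkMACAddress = "FALSE"' >> "$VMX"
```

Because each append is guarded by a grep, re-running the sketch leaves the file unchanged.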