VMware Drivers 4.4
The VMware Drivers enable the management of an OpenNebula cloud based on VMware ESX and/or VMware Server hypervisors. They use libvirt and direct API calls using RbVmomi to invoke the Virtual Infrastructure SOAP API exposed by the VMware hypervisors, and feature a simple configuration process that will leverage the stability, performance and feature set of any existing VMware based OpenNebula cloud.
In order to use the VMware Drivers, some software dependencies have to be met:
A VMware DATASTORE is needed; it is explained in the TM part of the Configuration section.

Optional Requirements. To enable some OpenNebula features you may need:
The creation of a user in the VMware hypervisor is recommended. Go to the Users & Groups tab in the VI Client, and create a new user (for instance, “oneadmin”) with the same UID and username as the oneadmin user executing OpenNebula in the front-end. Please remember to give full permissions to this user (Permissions tab).
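For reference, the UID and username of the oneadmin account in the front-end can be checked before creating the matching user in the ESX host:

<xterm>
$ id oneadmin
</xterm>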
The oneadmin user also needs write permissions over the datastore directory, which can be granted by running the following command in the ESX host:

$ chmod g+w /vmfs/volumes/<ds_id>
SSH access from the front-end to the ESX hosts is required (or, at least, it is needed to unlock all the functionality of OpenNebula). To ensure this, please remember to tick the “Grant shell access to this user” checkbox when creating the oneadmin user.
The access via SSH needs to be passwordless. Please follow the steps below to configure the ESX node:
<xterm>
$ su -
$ mkdir /etc/ssh/keys-oneadmin
$ chmod 755 /etc/ssh/keys-oneadmin
$ su - oneadmin
$ vi /etc/ssh/keys-oneadmin/authorized_keys
<paste here the contents of the oneadmin's front-end account public key (FE → $HOME/.ssh/id_{rsa,dsa}.pub) and exit vi>
$ chmod 600 /etc/ssh/keys-oneadmin/authorized_keys
</xterm>
More information on passwordless ssh connections here.
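Once the key is in place, passwordless access can be quickly verified from the front-end; a minimal check, assuming the ESX host is reachable under the name esx-host used in the examples below:

<xterm>
$ su - oneadmin
$ ssh esx-host hostname
</xterm>

The command should print the ESX hostname without prompting for a password.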
<xterm>
$ su
$ chmod +s /sbin/vmkfstools
</xterm>

<xterm>
$ su
$ chmod +s /bin/vim-cmd
</xterm>

<xterm>
$ su
$ chmod +s /sbin/esxcfg-vswitch
</xterm>
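That the setuid bit is in place for these binaries can be double-checked in the ESX host with:

<xterm>
$ ls -l /sbin/vmkfstools /bin/vim-cmd /sbin/esxcfg-vswitch
</xterm>

The permissions column should show an s in the owner execute position (e.g. -r-sr-xr-x).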
Persistency of the ESX filesystem has to be handled with care. Most ESX 5 files reside in an in-memory filesystem, which means faster access but also no persistency across reboots; this can be inconvenient when managing an ESX farm for an OpenNebula cloud.
Here is a recipe to make the configuration needed for OpenNebula persistent across reboots. The changes need to be done as root.
<xterm>
# vi /etc/rc.local
## Add this at the bottom of the file
mkdir /etc/ssh/keys-oneadmin
cat > /etc/ssh/keys-oneadmin/authorized_keys << _SSH_KEYS_
ssh-rsa <really long string with oneadmin's ssh public key>
_SSH_KEYS_
chmod 600 /etc/ssh/keys-oneadmin/authorized_keys
chmod +s /sbin/vmkfstools /bin/vim-cmd
chmod 755 /etc/ssh/keys-oneadmin
chown oneadmin /etc/ssh/keys-oneadmin/authorized_keys

# /sbin/auto-backup.sh
</xterm>
This information was based on this blog post.
There are additional configuration steps regarding storage. Please refer to the VMware Storage Model guide for more details.
Networking can be used in two different modes: pre-defined (to use pre-defined port groups) or dynamic (to dynamically create port groups and VLAN tagging). Please refer to the VMware Networking guide for more details.
In order to access running VMs through VNC, the ESX hosts need to be configured beforehand, basically to allow inbound VNC connections through their firewall. To do so, please follow this guide.
The drivers are enabled by placing the following in the /etc/one/oned.conf file:

#-------------------------------------------------------------------------------
# VMware Virtualization Driver Manager Configuration
#-------------------------------------------------------------------------------
VM_MAD = [
    name       = "vmware",
    executable = "one_vmm_sh",
    arguments  = "-t 15 -r 0 vmware -s sh",
    default    = "vmm_exec/vmm_exec_vmware.conf",
    type       = "vmware" ]

#-------------------------------------------------------------------------------
# VMware Information Driver Manager Configuration
#-------------------------------------------------------------------------------
IM_MAD = [
    name       = "vmware",
    executable = "one_im_sh",
    arguments  = "-c -t 15 -r 0 vmware" ]
#-------------------------------------------------------------------------------

SCRIPTS_REMOTE_DIR=/tmp/one
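After editing /etc/one/oned.conf, OpenNebula has to be restarted for the new drivers to be loaded. The exact command depends on how OpenNebula was installed (packaged installations use the opennebula system service); for a self-contained installation, as oneadmin in the front-end:

<xterm>
$ one stop
$ one start
</xterm>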
The configuration attributes for the VMware drivers are set in the /etc/one/vmwarerc
file. In particular the following values can be set:
ATTRIBUTE | DESCRIPTION |
---|---|
:libvirt_uri | URI used to connect to VMware through libvirt. When using VMware Server, the connection string set under LIBVIRT_URI needs to have its prefix changed from esx to gsx |
:username | username to access the VMware hypervisor |
:password | password to access the VMware hypervisor |
:datacenter | (only for vMotion) name of the datacenter where the hosts have been registered |
:vcenter | (only for vMotion) name or IP of the vCenter that manages the ESX hosts |
Example of the configuration file:
:libvirt_uri: "esx://@HOST@/?no_verify=1&auto_answer=1" :username: "oneadmin" :password: "mypass" :datacenter: "ha-datacenter" :vcenter: "London-DC"
The physical hosts containing the VMware hypervisors need to be added with the appropriate VMware Drivers. If the box running the VMware hypervisor is called, for instance, esx-host, the host would need to be registered with the following command (dynamic network mode):
$ onehost create esx-host -i vmware -v vmware -n vmware
or, for pre-defined networking:
$ onehost create esx-host -i vmware -v vmware -n dummy
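After the next monitoring cycle the host should appear correctly monitored in the host list; this can be checked with the standard CLI, for example:

<xterm>
$ onehost list
$ onehost show esx-host
</xterm>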
To register an existing VMware disk in an OpenNebula image catalog, place the disk files in a folder and write an image template that uses the absolute path to that folder as the PATH value, for example:
NAME = MyVMwareDisk
PATH = /absolute/path/to/disk/folder
TYPE = OS
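A sketch of the registration step, assuming the template above is saved in a file called vmware_disk.one (a hypothetical name):

<xterm>
$ oneimage create vmware_disk.one -d <vmware_datastore_id>
</xterm>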
Once registered the image can be used as any other image in the OpenNebula system as described in the Virtual Machine Images guide.
Datablock images and volatile disks will appear as raw devices on the guest, which will then need to be formatted. The FORMAT attribute is compulsory; possible values (more info on this here) are:
The following attributes can be used for VMware Virtual Machines:
<xterm> OS=[GUESTOS=<os-identifier>] </xterm>
<xterm> FEATURES=[PCIBRIDGE=<bridge-number>] </xterm>
You can add metadata straight to the .vmx file using RAW/DATA_VMX. This comes in handy to specify, for example, a specific guestOS type; more info here.
Following the last two sections, if we want a VM of guestOS type “Windows 7 server 64bit”, with disks plugged into an LSI SAS SCSI bus, we can use a template like:
NAME   = myVMwareVM
CPU    = 1
MEMORY = 256

DISK = [IMAGE_ID="7"]
NIC  = [NETWORK="public"]

RAW=[
  DATA="<devices><controller type='scsi' index='0' model='lsisas1068'/></devices>",
  DATA_VMX="pciBridge0.present = \"TRUE\"\npciBridge4.present = \"TRUE\"\npciBridge4.virtualDev = \"pcieRootPort\"\npciBridge4.functions = \"8\"\npciBridge5.present = \"TRUE\"\npciBridge5.virtualDev = \"pcieRootPort\"\npciBridge5.functions = \"8\"\npciBridge6.present = \"TRUE\"\npciBridge6.virtualDev = \"pcieRootPort\"\npciBridge6.functions = \"8\"\npciBridge7.present = \"TRUE\"\npciBridge7.virtualDev = \"pcieRootPort\"\npciBridge7.functions = \"8\"\nguestOS = \"windows7srv-64\"",
  TYPE="vmware" ]
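As an illustration, assuming this template is saved as vmware_vm.one (a hypothetical name), the VM could be created with:

<xterm>
$ onevm create vmware_vm.one
</xterm>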
The VMware Drivers consist of three drivers, with their corresponding files:

/var/lib/one/remotes/vmm/vmware: commands executed to perform actions.
/var/lib/one/remotes/im/vmware.d: vmware IM probes.
/usr/lib/one/tm_commands: commands executed to perform transfer actions.

And the following driver configuration files:
/etc/one/vmm_exec/vmm_exec_vmware.conf: This file is home for default values for domain definitions (in other words, OpenNebula templates). For example, if the user wants to set a default value for CPU requirements for all of their VMware domain definitions, simply edit the /etc/one/vmm_exec/vmm_exec_vmware.conf file and set CPU=0.6 in it. Now, when defining a template to be sent to a VMware resource, the user has the choice of “forgetting” to set the CPU requirement, in which case it will default to 0.6.
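A minimal sketch of such a default in /etc/one/vmm_exec/vmm_exec_vmware.conf, following the example above:

# Default CPU requirement applied to VMware domain definitions that omit it
CPU = 0.6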
It is generally a good idea to place defaults for the VMware-specific attributes, that is, attributes that are mandatory for the VMware hypervisor but not for other hypervisors. Attributes that are specific to VMware but not mandatory are also recommended to have a default.
/etc/one/tm_vmware/tm_vmware.conf: This file contains the scripts tied to the different actions that the TM driver can deliver. Here you can deactivate functionality like the DELETE action (this can be accomplished using the dummy tm driver, dummy/tm_dummy.sh) or change the default behavior.

More generic information about drivers: