The VMware Storage Model 3.8

There are several possibilities when designing the storage model for your cloud using VMware hypervisors. OpenNebula ships with two sets of datastore drivers out-of-the-box, and with three sets of transfer manager drivers. A datastore is defined by the combination of a datastore driver set and a transfer manager driver set, opening a broad range of possible configurations.


Requirements

OpenNebula Front-end

Most of the datastore operations are performed remotely on the ESX servers, but some actions are performed in the OpenNebula front-end. These use standard filesystem commands, with the exception of:

  • qemu-img tool: needed for the creation of datablocks
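
For instance, an empty 10GB datablock in vmdk format could be created on the front-end with a command along these lines (path and size are just examples):

<xterm>
$ qemu-img create -f vmdk /tmp/datablock.vmdk 10G
</xterm>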

In order to use the VMFS datastore in non-SSH mode, the OpenNebula front-end needs to have the vSphere CLI installed.

VMware ESX servers

In order to use the VMFS datastore in SSH mode, the ESX servers need to have SSH access configured for the oneadmin account. There are no other requirements for any of the transfer methods described below.

Configuration

The storage model for VMware OpenNebula clouds envisions three possible scenarios, depending on the method used for staging images and the locality of the data. Each scenario requires a different configuration in terms of datastore drivers and ESX setup. Please take into account the difference between the system datastore (where the running VMs and their images reside) and the images datastore (where images are registered to form a catalog from which VMs are later built).

The following scenarios are the recommended and most general ones, likely to satisfy the majority of requirements. This doesn't mean that these are the only possibilities; others can be achieved by different combinations of datastore and transfer manager drivers.

:!: The datastore location on ESX hypervisors is “/vmfs/volumes”. There are two choices:

  • In homogeneous clouds (all the hosts are ESX) set the following in /etc/one/oned.conf 'DATASTORE_LOCATION=/vmfs/volumes'.
  • In heterogeneous clouds (mix of ESX and other hypervisor hosts) put all the ESX hosts in clusters with a 'DATASTORE_LOCATION=/vmfs/volumes' in their template.
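
A minimal sketch of both options (the cluster name is hypothetical):

<xterm>
# /etc/one/oned.conf (homogeneous clouds)
DATASTORE_LOCATION = /vmfs/volumes

# heterogeneous clouds: set the same value in the cluster template instead
$ onecluster update esx-cluster
DATASTORE_LOCATION = /vmfs/volumes
</xterm>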

Scenario 1: Distributed File System

This scenario implies that both the system and the image datastores reside on a NAS, exported through a distributed file system such as NFS.

Both the OpenNebula front-end and the ESX servers need to mount both the images and the system datastore.

  • Advantages: Persistent images through linking, fast VM startup
  • Disadvantages: No data locality, VMs I/O through NAS

Scenario Configuration

  • The OpenNebula front-end needs to mount the image datastore(s) as /var/lib/one/datastores/<datastore-id>, as shown in the sketch below.
  • The ESX servers need to mount both the system datastore (naming it 0) and the image datastore (naming it <datastore-id>) as NFS datastores. They also need a passwordless SSH connection between the front-end and the ESX servers, under the “oneadmin” account.
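
For instance, the front-end side of this setup could look like the following (server and export names are hypothetical):

<xterm>
$ sudo mount -t nfs nas.opennebula.org:/export/datastores/100 /var/lib/one/datastores/100
</xterm>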

OpenNebula Configuration

The system datastore needs to be updated, and the image datastore needs to be created, with the following drivers:

Datastore | DS Drivers | TM Drivers
System    | -          | shared
Images    | vmware     | shared
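
As a sketch, the system datastore drivers could be checked or set from the CLI (shared is already the default on a fresh install; creating the image datastore is illustrated in the Configuring the Datastore Drivers for VMware section below):

<xterm>
$ onedatastore update 0
# in the editor that opens, set:
TM_MAD = shared
</xterm>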

Scenario 2: Local System Datastore

This scenario implies that the system datastore is located locally on each ESX server, while the image datastore resides either on a SAN server exporting a VMFS volume (scenario 2.1), on a NAS exported through a distributed file system such as NFS (scenario 2.2), or images are staged remotely into the ESX server through SSH connections (scenario 2.3).

The ESX servers need to mount the image datastore either as iSCSI (2.1) or through NFS (2.2). In scenario 2.3 the image datastore doesn't need to be mounted on the ESX server.

  • Advantages: Data locality, VMs I/O locally
  • Disadvantages: Persistent images through copy, slow VM start

Scenario Configuration

  • The OpenNebula front-end needs to mount the image datastore as an NFS import in scenarios 2.2 and 2.3. Scenario 2.1 doesn't require the front-end to mount the image datastore, but OpenNebula needs to be configured as per the table below.
  • The ESX servers need a local datastore named 0 to act as the system datastore, and need to mount the image datastore (naming it <datastore-id>) through NFS (2.2) or iSCSI (2.1).

OpenNebula Configuration

The system datastore needs to be updated, and the image datastore needs to be created, with the following drivers:

Datastore  | DS Drivers | TM Drivers
System     | -          | ssh
Images 2.1 | vmfs       | vmfs
Images 2.2 | vmware     | shared
Images 2.3 | vmware     | ssh
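
For instance, an image datastore template for scenario 2.3 (SSH staging; the datastore name is just an example) could look like:

<xterm>
$ cat ds_2_3.conf
NAME   = vmware_ssh_images
DS_MAD = vmware
TM_MAD = ssh

$ onedatastore create ds_2_3.conf
</xterm>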

Scenario 3: Pure VMFS

In this scenario all the volumes implied in the image staging are purely VMFS volumes, taking full advantage of the VMware filesystem (VM image locking and improved performance). This scenario implies that both the system and the image datastores reside on a SAN, exported through iSCSI to the ESX servers.

  • Advantages: Data locality, VMs I/O locally. Pure VMFS system (improved performance)
  • Disadvantages: Persistent images through copy, slow VM start

Scenario Configuration

  • The OpenNebula front-end doesn't need to mount any datastore.
  • The ESX servers need to mount as iSCSI both the system datastore (naming it 0) and the image datastore (naming it <datastore-id>).

OpenNebula Configuration

The system datastore needs to be updated, and the image datastore needs to be created, with the following drivers:

Datastore | DS Drivers | TM Drivers
System    | -          | vmfs
Images    | vmfs       | vmfs
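
A minimal sketch for this scenario (the image datastore name is hypothetical):

<xterm>
$ onedatastore update 0
# in the editor that opens, set:
TM_MAD = vmfs

$ cat ds_vmfs.conf
NAME   = vmfs_images
DS_MAD = vmfs
TM_MAD = vmfs

$ onedatastore create ds_vmfs.conf
</xterm>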

Configuring the Transfer Manager Drivers

There are three possible choices of transfer drivers for the system datastore:

  • shared drivers: this is the default value for the system datastore on a fresh OpenNebula install. The only needed configuration is to mount the system datastore exported by the front-end (/var/lib/one/datastores/0) on all the ESX nodes, under the name “0” (see the note below if you are planning to use a non default system datastore).
  • ssh drivers: the system datastore needs to be updated in OpenNebula (onedatastore update 0) to set the TM_MAD drivers to ssh. The ESX nodes need to mount a local datastore with the name “0” (see the note below if you are planning to use a non default system datastore).
  • vmfs drivers: the system datastore needs to be updated in OpenNebula (onedatastore update 0) to set the TM_MAD drivers to vmfs. These drivers can work in two modes: triggering the events remotely using the exposed VI API (non-SSH mode, which requires the vSphere CLI installed in the front-end), or doing this through an SSH channel (SSH mode). The method is chosen in the datastore template (see Configuring the Datastore Drivers for VMware), and the default value (in case it is not present in the datastore template) can be changed in the /var/lib/one/remotes/tm/vmfs/tm_vmfs.conf file (TM_USE_SSH attribute). The ESX nodes need to mount a local or remote VMFS datastore with the name “0” (see the note below if you are planning to use a non default system datastore).
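
For example, to make the vmfs transfer drivers default to the SSH channel, a line along these lines could be set in /var/lib/one/remotes/tm/vmfs/tm_vmfs.conf (a sketch; check the file shipped with your version for the exact syntax):

<xterm>
# Use the SSH channel instead of the VI API by default
TM_USE_SSH="yes"
</xterm>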

:!: The system datastore can be other than the default one (“0”). In this case, the ESX servers will need to mount the datastore with the same ID as the datastore has in OpenNebula. More details in the System Datastore Guide.

Using the Shared Transfer Driver

For a general description of how this transfer mechanism works, please refer to the Filesystem Datastore guide.

If we are planning to use a NAS, the datastore should be exported with the appropriate flags so the files created by the VMware hypervisor can be managed by OpenNebula. An example of a configuration line in /etc/exports of the NFS server:

/var/lib/one 192.168.1.0/24(rw,sync,no_subtree_check,root_squash,anonuid=9001,anongid=9001)

where 9001 is the UID and GID of the “oneadmin” user in the front-end.

In the case of a SAN, it should be accessible and configured so the ESX servers can mount the iSCSI export.

Host Configuration

For an ESX server to mount a datastore it needs to use the <datastore_id> as the datastore name. For example, to make use of the (vmfs or vmware) datastore 100, exported via NFS by the server san.opennebula.org, you would add the network filesystem with something like the following sketch (esxcfg-nas is run on the ESX host; the same operation can also be performed from the vSphere client):
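
<xterm>
$ esxcfg-nas -a -o san.opennebula.org -s /var/lib/one/datastores/100 100
</xterm>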

Using the SSH Transfer Driver

Since the VMware virtualization drivers already require a passwordless SSH connection from the front-end to the ESX nodes for the oneadmin user (see this for more information), no additional configuration is needed to use an SSH Transfer Driver based vmware or vmfs datastore.
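
A quick way to verify this setup from the front-end (esx-host1 is a hypothetical host name):

<xterm>
$ su - oneadmin
$ ssh esx-host1 hostname
</xterm>

If the host name is printed without a password prompt, the SSH based drivers are ready to be used.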

Using the VMFS Transfer Driver

The VMFS drivers are a specialization of the shared drivers that work with the VMware vmdk filesystem tools. The same features/restrictions and configuration apply, so be sure to read the shared driver section.

The difference with the shared drivers is that the “vmkfstools” command is used, which specializes in VMFS volumes. This comes with a number of advantages, like FS locking, easier VMDK cloning, format management, etc.
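
For reference, cloning a vmdk with vmkfstools on an ESX host looks roughly like this (the paths are hypothetical):

<xterm>
$ vmkfstools -i /vmfs/volumes/100/source.vmdk /vmfs/volumes/0/clone.vmdk
</xterm>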

Configuring the Datastore Drivers for VMware

The first step to create a datastore is to set up a template file for it. In the following table you can see the supported configuration attributes. The datastore type is set by its drivers, and there are two choices for VMware clouds managed by OpenNebula:

  • The vmware datastore drivers let you store VM images to be used with the VMware hypervisor family. The vmware datastore is a specialized version of the Filesystem datastore that deals with the vmdk format. The system datastore also needs to be taken into account, since compatibility must be ensured between the system datastore and all the registered datastores.
  • The vmfs datastore drivers allow the use of the VMware VM filesystem, which handles VM file locks and also boosts I/O performance.
    • This datastore can work in two modes: triggering the events remotely using the exposed VI API (non-SSH mode, which requires the vSphere CLI installed in the front-end), or doing this through an SSH channel (SSH mode). The method is chosen in the datastore template (see the table below), and its defaults can be changed in the /var/lib/one/remotes/datastore/vmfs/vmfs.conf file.
    • Another important aspect to correctly configure a vmfs datastore set of drivers is to choose the ESX bridges, i.e., the ESX servers that are going to be used as proxies to stage images into the vmfs datastore. A list of bridges must be defined with the BRIDGE_LIST attribute of the datastore template (see the table below). The drivers will pick one ESX server from that list in a round robin fashion.
    • The vmfs datastore needs to use the front-end as a buffer for the image staging in some cases; this buffer can be set with the DS_TMP_DIR attribute.

Attribute       | Description
NAME            | The name of the datastore
DS_MAD          | The DS type, use vmware or vmfs
TM_MAD          | Transfer drivers for the datastore: shared, ssh or vmfs, see below
DISK_TYPE       | Type for the VM disks using images from this datastore. Supported values are: block, file
RESTRICTED_DIRS | Paths that cannot be used to register images. A space separated list of paths. :!:
SAFE_DIRS       | If you need to un-block a directory under one of the RESTRICTED_DIRS. A space separated list of paths.
UMASK           | Default mask for the files created in the datastore. Defaults to 0007
BRIDGE_LIST     | Space separated list of ESX servers that are going to be used as proxies to stage images into the datastore (vmfs datastores only)
TM_USE_SSH      | “yes” or “no”. Whether the vmfs transfer drivers will use SSH or not (vmfs TMs only). Defaults to the value in /var/lib/one/remotes/tm/vmfs/tm_vmfs.conf.
DS_USE_SSH      | “yes” or “no”. Whether the vmfs datastore drivers will use SSH or not (vmfs datastores only). Defaults to the value in /var/lib/one/remotes/datastore/vmfs/vmfs.conf.
DS_TMP_DIR      | Path in the OpenNebula front-end to be used as a buffer to stage files into vmfs datastores. Defaults to the value in /var/lib/one/remotes/datastore/vmfs/vmfs.conf.
NO_DECOMPRESS   | Do not try to untar or decompress the file to be registered. Useful for specialized Transfer Managers

:!: This will prevent users from registering important files as VM images and accessing them through their VMs. OpenNebula will automatically add its configuration directories: /var/lib/one, /etc/one and oneadmin's home. If users try to register an image from a restricted directory, they will get the following error message: “Not allowed to copy image file”.
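
As a sketch, a vmfs image datastore template combining these attributes could look as follows (host names, datastore name and the temporary path are hypothetical):

<xterm>
$ cat vmfs_ds.conf
NAME        = vmfs_images
DS_MAD      = vmfs
TM_MAD      = vmfs
BRIDGE_LIST = "esx-host1 esx-host2"
DS_TMP_DIR  = /var/tmp
</xterm>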

For example, the following illustrates the creation of a vmware datastore using the shared transfer drivers.

<xterm>
$ cat ds.conf
NAME   = production
DS_MAD = vmware
TM_MAD = shared

$ onedatastore create ds.conf
ID: 100

$ onedatastore list
  ID NAME            CLUSTER  IMAGES TYPE   TM
   0 system          none     0      fs     shared
   1 default         none     3      fs     shared
 100 production      none     0      vmware shared
</xterm>

You can check more details of the datastore by issuing the onedatastore show command.

Finally, you have to prepare the storage for the datastore and configure the hosts to access it. This depends on the transfer mechanism you have chosen for your datastore.

:!: Note that datastores are not associated to any cluster by default, and they are supposed to be accessible by every single host. If you need to configure datastores for just a subset of the hosts take a look at the Cluster guide.

Tuning and Extending

Drivers can be easily customized. Please refer to the specific guide for each datastore driver or to the Storage subsystem developer's guide.

However, you may find the files you need to modify here:

  • /var/lib/one/remotes/datastore/<DS_DRIVER>
  • /var/lib/one/remotes/tm/<TM_DRIVER>