The VMware Storage Model 4.0

Overview

There are two options when designing the storage model for your cloud when using VMware hypervisors: NFS Datastores and VMFS Datastores. When configuring them, it is important to keep in mind that there are (at least) two datastores to define: the system datastore (where the running VMs and their images reside; it only needs transfer manager drivers) and the images datastore (where the images are stored; it needs both datastore and transfer manager drivers).


Using NFS Datastores

In the NFS Datastores model, both the image and the system datastore are exported from an NFS server accessible both from the front-end (which can be the NFS server itself) and the ESX hosts.

  • The front-end will mount or export the whole /var/lib/one/datastores directory
  • The ESX hosts will mount each datastore separately

Requirements

OpenNebula Front-end

Most of the NFS datastore operations use standard filesystem commands, with the exception of:

  • qemu-img tool: needed for the creation of DATABLOCKS
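As a sketch of what the drivers do with this tool, an empty datablock in vmdk format could be created as follows (the path and size are illustrative assumptions, and qemu-img must be installed on the front-end):

```shell
# Create an empty 1 GiB datablock image in vmdk format
# (illustrative path and size, not taken from the guide)
qemu-img create -f vmdk /var/tmp/datablock.vmdk 1G
```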

Description

This scenario implies that both the system and the image datastores reside on a NAS, exported through a distributed filesystem such as NFS.

Both the OpenNebula front-end and the ESX servers need to mount both the images and the system datastores.

  • Advantages: Persistent images through linking, fast VM startup
  • Disadvantages: No data locality, VMs I/O through NAS

Configuration

Scenario Configuration

  • The OpenNebula front-end needs to mount /var/lib/one/datastores/
    • or, alternatively, the image and system datastore(s) as /var/lib/one/datastores/<datastore-id>.
  • The ESX servers need to mount both the system datastore and the image datastore as NFS datastores, naming them <datastore-id> (for instance, 0 for the system datastore and 1 for the image datastore). See the shared transfer manager configuration for more details on how to configure this.
  • A passwordless SSH connection between the front-end and the ESX servers, under the “oneadmin” account, is required.

:!: The system datastore can be other than the default one (“0”). In this case, the ESX hosts will need to mount the datastore with the same id as the datastore has in OpenNebula. More details in the System Datastore Guide.
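The passwordless SSH access can be set up along these lines (the hostname is an illustrative assumption, and the sketch assumes the ESX hosts accept standard OpenSSH key installation):

```shell
# As oneadmin on the front-end: generate a key pair if not already present
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Copy the public key to each ESX server (esx1.example.com is an assumption)
ssh-copy-id oneadmin@esx1.example.com

# Verify that no password is requested
ssh oneadmin@esx1.example.com hostname
```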

OpenNebula Configuration

The system datastore needs to be updated, and the image datastore needs to be created, with the following drivers:

^ Datastore ^ DS Drivers ^ TM Drivers ^
| System | - | shared |
| Images | vmware | shared |

System Datastore [shared]

Shared transfer drivers are the default on a fresh OpenNebula install for the system datastore. There is no need to configure datastore drivers for the system datastore.

Images Datastore [vmware, shared]

The image datastore needs to be updated to use vmware drivers for the datastore drivers, and shared drivers for the transfer manager drivers.

  • the default datastore can be updated:

<xterm> $ onedatastore update 1 DS_MAD=vmware TM_MAD=shared </xterm>

  • or a new one created. The first step to create a datastore is to set up a template file for it. In the following table you can see the supported configuration attributes.

<xterm> $ onedatastore create <datastore.tmpl> </xterm>

where <datastore.tmpl> is a file that can contain a line (Attribute=Value) for each of the following attributes:

^ Attribute ^ Description ^
| NAME | The name of the datastore |
| DS_MAD | The DS type, use vmware or vmfs |
| TM_MAD | Transfer drivers for the datastore: shared, ssh or vmfs, see below |
| DISK_TYPE | Type for the VM disks using images from this datastore. Supported values are: block, file |
| RESTRICTED_DIRS | Paths that cannot be used to register images. A space separated list of paths. :!: |
| SAFE_DIRS | If you need to un-block a directory under one of the RESTRICTED_DIRS. A space separated list of paths. |
| UMASK | Default mask for the files created in the datastore. Defaults to 0007 |
| NO_DECOMPRESS | Do not try to untar or decompress the file to be registered. Useful for specialized Transfer Managers |
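For instance, a minimal template for an images datastore using these drivers could look like the following (the datastore name is an illustrative assumption):

```
# datastore.tmpl -- example images datastore definition
# (the NAME value is an assumption, not taken from the guide)
NAME   = vmware_images
DS_MAD = vmware
TM_MAD = shared
```

It would then be registered with <xterm> $ onedatastore create datastore.tmpl </xterm>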

Transfer Manager driver configuration [shared]

For a general description of how this transfer mechanism works, please refer to the Filesystem Datastore guide.

If we are planning to use a NAS, the datastore should be exported with the appropriate flags so that the files created by the VMware hypervisor can be managed by OpenNebula. An example of a configuration line in /etc/exports on the NFS server:

/var/lib/one 192.168.1.0/24(rw,sync,no_subtree_check,root_squash,anonuid=9001,anongid=9001)

where 9001 is the UID and GID of “oneadmin” user in the front-end.

Host Configuration

For an ESX server to mount a datastore, it needs to use the <datastore_id> as the datastore name. For example, to make use of the (vmfs or vmware) datastore 100, exported through NFS by the server nas.opennebula.org, you should add the network filesystem on the ESX host using that id as the datastore name.
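As a sketch, this could be done on the ESX host with the esxcfg-nas tool (the export path is an illustrative assumption):

```shell
# On the ESX host: register the NFS export as a datastore named "100"
# (nas.opennebula.org is taken from the text above;
#  the export path /var/lib/one/datastores/100 is an assumption)
esxcfg-nas -a -o nas.opennebula.org -s /var/lib/one/datastores/100 100
```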

Datastore driver configuration [vmware]

The vmware datastore drivers let you store VM images to be used with the VMware hypervisor family. They are a specialized version of the Filesystem datastore that deals with the vmdk format.

Using VMFS Datastores

Requirements

OpenNebula Front-end

In order to use the VMFS datastore in non SSH-mode, the OpenNebula front-end needs to have the vSphere CLI installed.

VMware ESX servers

In order to use the VMFS datastore in SSH-mode, the ESX servers need to have the SSH access configured for the oneadmin account.

If the VMFS volumes are exported through a SAN, it should be accessible and configured so that the ESX servers can mount the iSCSI export.

Description

In this scenario all the volumes involved in the image staging are purely VMFS volumes, taking full advantage of the VMware filesystem (VM image locking and improved performance). This scenario implies that both the system and the image datastores reside on a SAN, exported through iSCSI to the ESX servers.

  • Advantages: Data locality, VMs I/O locally. Pure VMFS system (improved performance)
  • Disadvantages: Persistent images through copy, slow VM start

Configuration

Scenario Configuration

  • The OpenNebula front-end doesn't need to mount any datastore.
  • The ESX servers need to present or mount over iSCSI both the system datastore and the image datastore, naming them <datastore-id> (for instance, 0 for the system datastore and 1 for the image datastore).

:!: The system datastore can be other than the default one (“0”). In this case, the ESX hosts will need to mount the datastore with the same id as the datastore has in OpenNebula. More details in the System Datastore Guide.

OpenNebula Configuration

The system datastore needs to be updated, and the image datastore needs to be created, with the following drivers:

^ Datastore ^ DS Drivers ^ TM Drivers ^
| System | - | vmfs |
| Images | vmfs | vmfs |

The datastore location on ESX hypervisors is “/vmfs/volumes”. There are two choices:

  • In homogeneous clouds (all the hosts are ESX) set the following in /etc/one/oned.conf 'DATASTORE_LOCATION=/vmfs/volumes'.
  • In heterogeneous clouds (mix of ESX and other hypervisor hosts) put all the ESX hosts in clusters with a 'DATASTORE_LOCATION=/vmfs/volumes' in their template.
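For the heterogeneous case, the cluster template can be edited as sketched below (the cluster name is an illustrative assumption):

```shell
# Open the cluster template in an editor
# (esx_cluster is an assumed cluster name, not taken from the guide)
$ onecluster update esx_cluster

# ...and add the following line in the editor:
# DATASTORE_LOCATION=/vmfs/volumes
```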

System Datastore [vmfs]

vmfs drivers: the system datastore needs to be updated in OpenNebula (onedatastore update 0) to set the TM_MAD drivers to vmfs. There is no need to configure datastore drivers for the system datastore.

Images Datastore [vmfs,vmfs]

The image datastore needs to be updated to use vmfs drivers for the datastore drivers, and vmfs drivers for the transfer manager drivers.

  • the default datastore can be updated:

<xterm> $ onedatastore update 1 DS_MAD=vmfs TM_MAD=vmfs </xterm>

  • or a new one created. The first step to create a datastore is to set up a template file for it. In the following table you can see the supported configuration attributes.

<xterm> $ onedatastore create <datastore.tmpl> </xterm>

where <datastore.tmpl> is a file that can contain a line (Attribute=Value) for each of the following attributes:

^ Attribute ^ Description ^
| NAME | The name of the datastore |
| DS_MAD | The DS type, use vmware or vmfs |
| TM_MAD | Transfer drivers for the datastore: shared, ssh or vmfs, see below |
| DISK_TYPE | Type for the VM disks using images from this datastore. Supported values are: block, file |
| RESTRICTED_DIRS | Paths that cannot be used to register images. A space separated list of paths. :!: |
| SAFE_DIRS | If you need to un-block a directory under one of the RESTRICTED_DIRS. A space separated list of paths. |
| UMASK | Default mask for the files created in the datastore. Defaults to 0007 |
| BRIDGE_LIST | Space separated list of ESX servers that are going to be used as proxies to stage images into the datastore (vmfs datastores only) |
| TM_USE_SSH | “yes” or “no”. Whether the vmfs TMs will use ssh or not respectively (vmfs TMs only). Defaults to the value in /var/lib/one/remotes/tm/vmfs/tm_vmfs.conf. |
| DS_USE_SSH | “yes” or “no”. Whether the vmfs datastore drivers will use ssh or not respectively (vmfs datastores only). Defaults to the value in /var/lib/one/remotes/datastore/vmfs/vmfs.conf. |
| DS_TMP_DIR | Path in the OpenNebula front-end to be used as a buffer to stage in files in vmfs datastores. Defaults to the value in /var/lib/one/remotes/datastore/vmfs/vmfs.conf. |
| NO_DECOMPRESS | Do not try to untar or decompress the file to be registered. Useful for specialized Transfer Managers |

:!: This will prevent users registering important files as VM images and accessing them through their VMs. OpenNebula will automatically add its configuration directories: /var/lib/one, /etc/one and oneadmin's home. If users try to register an image from a restricted directory, they will get the following error message: “Not allowed to copy image file”.
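A minimal template for a vmfs images datastore could look like the following (the datastore name and the bridge hostnames are illustrative assumptions):

```
# datastore.tmpl -- example vmfs images datastore definition
# (NAME and BRIDGE_LIST values are assumptions, not taken from the guide)
NAME        = vmfs_images
DS_MAD      = vmfs
TM_MAD      = vmfs
BRIDGE_LIST = "esx1.example.com esx2.example.com"
```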

Transfer Manager [vmfs]

These drivers can work in two modes: triggering the events remotely using the exposed VI API (which requires installing the vSphere CLI in the front-end, non SSH-mode), or doing this through an SSH channel (SSH-mode). The method is chosen in the datastore template, and the default value (in case it is not present in the datastore template) can be changed in the /var/lib/one/remotes/tm/vmfs/tm_vmfs.conf file (TM_USE_SSH attribute).

The vmfs drivers are a specialization of the shared drivers to work with the VMware vmdk filesystem tools. The same features/restrictions and configuration applies so be sure to read the shared driver section.

The difference with the shared drivers is that the vmkfstools command is used, which specializes in VMFS volumes. This comes with a number of advantages, like FS locking, easier VMDK cloning, format management, etc.

Datastore driver configuration [vmfs]

The vmfs datastore drivers allow the use of the VMware VM filesystem, which handles VM file locks and also boosts I/O performance.

  • This datastore can work in two modes: triggering the events remotely using the exposed VI API (which requires installing the vSphere CLI in the front-end, non SSH-mode), or doing this through an SSH channel (SSH-mode). The method is chosen in the datastore template (see the table below), and its defaults can be changed in the /var/lib/one/remotes/datastore/vmfs/vmfs.conf file.
  • Another important aspect of correctly configuring a vmfs datastore set of drivers is to choose the ESX bridges, i.e., the ESX servers that are going to be used as proxies to stage images into the vmfs datastore. A list of bridges must be defined with the BRIDGE_LIST attribute of the datastore template (see the table below). The drivers will pick one ESX server from that list in a round-robin fashion.
  • The vmfs datastore needs to use the front-end as a buffer for image staging in some cases; this buffer can be set with the DS_TMP_DIR attribute.

Tuning and Extending

Drivers can be easily customized. Please refer to the specific guide for each datastore driver or to the Storage subsystem developer's guide.

However you may find the files you need to modify here:

  • /var/lib/one/remotes/datastore/<DS_DRIVER>
  • /var/lib/one/remotes/tm/<TM_DRIVER>