The VMware Storage Model 3.8
There are several possibilities when designing the storage model for your cloud using VMware hypervisors. OpenNebula ships out-of-the-box with two sets of datastore drivers and three sets of transfer manager drivers. A datastore is defined by the combination of a datastore driver set and a transfer manager driver set, which opens up a broad range of possible configurations.
Most of the datastore operations are performed remotely on the ESX servers, but some actions are performed in the OpenNebula front-end. Most of these operations use standard filesystem commands, with the following exceptions:

- In order to use the VMFS datastore in non SSH mode, the OpenNebula front-end needs to have the vSphere CLI installed.
- In order to use the VMFS datastore in SSH mode, the ESX servers need to have SSH access configured for the oneadmin account.

There are no other requirements for any of the transfer methods described below.
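The SSH prerequisite can be verified beforehand from the front-end; a minimal sketch, where esx1 is a placeholder for one of your ESX hosts:

```shell
# Run as oneadmin on the OpenNebula front-end; this must log in
# without a password prompt for the drivers to work
# (esx1 is a hypothetical hostname).
ssh oneadmin@esx1 hostname
```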
The storage model for VMware-based OpenNebula clouds envisions three possible scenarios, depending on the method used for staging images and the locality of the data. Each scenario requires a different configuration in terms of datastore drivers and ESX configuration. Please take into account the difference between the system datastore (where the running VMs and their images reside) and the images datastore (where images are registered, forming a catalog later used to build VMs).
The following scenarios are the recommended and more general ones, likely to satisfy the majority of requirements. This doesn't mean that they are the only possibilities: others can be achieved with different combinations of datastore and transfer manager drivers.
The datastore location on ESX hypervisors is “/vmfs/volumes”. There are two choices:
This scenario implies that both the system and the images datastores reside on a NAS, exported through a distributed file system such as NFS. Both the OpenNebula front-end and the ESX servers need to mount the images and the system datastores.
Scenario Configuration

Both datastores need to be mounted in the OpenNebula front-end under /var/lib/one/datastores/<datastore-id>.

OpenNebula Configuration
The system datastore needs to be updated, and the image datastore needs to be created, with the following drivers:
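As a sketch, the update and creation could look as follows. The driver values shown are the ones implied by this shared scenario (vmware datastore drivers with the shared transfer drivers), and the datastore name is hypothetical:

```shell
# Image datastore template: vmware DS drivers + shared TM drivers
# (nfs_images is a placeholder name).
cat > images_ds.conf <<'EOF'
NAME = nfs_images
DS_MAD = vmware
TM_MAD = shared
EOF

# System datastore update: shared TM drivers
cat > system_ds.conf <<'EOF'
TM_MAD = shared
EOF

# Then, on the front-end as oneadmin:
#   onedatastore update 0 system_ds.conf
#   onedatastore create images_ds.conf
```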
This scenario implies that the system datastore is local to each ESX server, while the images datastore resides either in a SAN server exporting a VMFS volume (scenario 2.1), or on a NAS exported through a distributed file system such as NFS (scenario 2.2), or images can be staged remotely into the ESX server through ssh connections (scenario 2.3).
The ESX servers need to mount the images datastore either through iSCSI (2.1) or through NFS (2.2). In scenario 2.3 the images datastore doesn't need to be mounted in the ESX server.
Scenario Configuration
OpenNebula Configuration
The system datastore needs to be updated, and the image datastore needs to be created, with the following drivers:
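For the ssh staging variant (2.3), a sketch of the templates could look as follows; the driver values are assumptions derived from this scenario (ssh transfer drivers with a local system datastore), and the datastore name is hypothetical:

```shell
# Image datastore template: vmware DS drivers + ssh TM drivers
# (ssh_images is a placeholder name).
cat > images_ds.conf <<'EOF'
NAME = ssh_images
DS_MAD = vmware
TM_MAD = ssh
EOF

# System datastore update: ssh TM drivers (ESX-local system datastore)
cat > system_ds.conf <<'EOF'
TM_MAD = ssh
EOF

# Then, on the front-end as oneadmin:
#   onedatastore update 0 system_ds.conf
#   onedatastore create images_ds.conf
```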
In this scenario all the volumes involved in the image staging are purely VMFS volumes, taking full advantage of the VMware filesystem (VM image locking and improved performance). This scenario implies that both the system and the images datastores reside on a SAN, exported through iSCSI to the ESX servers.
Scenario Configuration
OpenNebula Configuration
The system datastore needs to be updated, and the image datastore needs to be created, with the following drivers:
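A sketch for this pure VMFS setup, assuming the vmfs drivers on both datastores; the datastore name and the ESX host names in BRIDGE_LIST are placeholders:

```shell
# Image datastore template: vmfs DS drivers + vmfs TM drivers;
# BRIDGE_LIST names the ESX servers used as staging proxies
# (esx1/esx2 are hypothetical hostnames).
cat > images_ds.conf <<'EOF'
NAME = vmfs_images
DS_MAD = vmfs
TM_MAD = vmfs
BRIDGE_LIST = "esx1 esx2"
EOF

# System datastore update: vmfs TM drivers
cat > system_ds.conf <<'EOF'
TM_MAD = vmfs
EOF

# Then, on the front-end as oneadmin:
#   onedatastore update 0 system_ds.conf
#   onedatastore create images_ds.conf
```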
There are three possible choices of transfer drivers for the system datastore:

- shared: the system datastore needs to be mounted in the front-end (/var/lib/one/datastores/0) and in the ESX servers, under the name “0” (see note below if you are planning to use a non default system datastore).
- ssh: update the system datastore (onedatastore update 0) to set the TM_MAD driver to ssh. The ESX nodes need to mount a local datastore with the name “0” (see note below if you are planning to use a non default system datastore).
- vmfs: update the system datastore (onedatastore update 0) to set the TM_MAD driver to vmfs. These drivers can work in two modes: triggering the events remotely using the exposed VI API (non SSH mode, which requires installing the vSphere CLI in the front-end), or doing this through an ssh channel (SSH mode). The method is chosen in the datastore template (see Configuring datastore drivers for VMware), and the default value (in case it is not present in the datastore template) can be changed in the /var/lib/one/remotes/tm/vmfs/tm_vmfs.conf file (TM_USE_SSH attribute). The ESX nodes need to mount a local or remote VMFS datastore with the name “0” (see note below if you are planning to use a non default system datastore).
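For example, to make SSH mode the default for the vmfs transfer drivers, the TM_USE_SSH attribute can be set in tm_vmfs.conf. This is a sketch: on a real front-end the file is /var/lib/one/remotes/tm/vmfs/tm_vmfs.conf, a local copy is used here for illustration, and the exact file syntax is assumed:

```shell
# Append the default to the driver configuration (local copy used
# here; the real file lives under /var/lib/one/remotes/tm/vmfs/).
echo 'TM_USE_SSH=yes' >> tm_vmfs.conf
grep TM_USE_SSH tm_vmfs.conf
```

The per-datastore TM_USE_SSH attribute in the datastore template overrides this default.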
For a general description of how this transfer mechanism works, please refer to the Filesystem Datastore guide.
If we are planning to use a NAS, the datastore should be exported with the appropriate flags so the files created by the VMware hypervisor can be managed by OpenNebula. An example of a configuration line in /etc/exports on the NFS server:
/var/lib/one 192.168.1.0/24(rw,sync,no_subtree_check,root_squash,anonuid=9001,anongid=9001)
where 9001 is the UID and GID of “oneadmin” user in the front-end.
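After editing /etc/exports, the export can be applied and sanity-checked; a sketch (to be run on the NFS server and front-end respectively):

```shell
# On the NFS server, re-export and verify:
exportfs -ra
showmount -e localhost

# On the front-end, confirm that anonuid/anongid in /etc/exports
# match the oneadmin account:
id oneadmin
```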
In the case of a SAN, it should be accessible and configured so the ESX servers can mount the iSCSI export.
Host Configuration
For an ESX server to mount a datastore, it needs to use the <datastore_id> as the datastore name. For example, to make use of the (vmfs or vmware) datastore 100, exported through NFS by the server san.opennebula.org, you should add it as a network filesystem named “100”.
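On the ESX console this can be done with esxcfg-nas; a sketch, assuming the export path from the NFS example above:

```shell
# On each ESX host: add the NFS export as a datastore whose label
# matches the OpenNebula datastore ID, here 100 (the export path is
# the one used in the /etc/exports example above).
esxcfg-nas -a -o san.opennebula.org -s /var/lib/one/datastores/100 100

# List the configured NAS datastores to verify:
esxcfg-nas -l
```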
Since the VMware virtualization drivers already require a passwordless ssh connection from the front-end to the ESX nodes for the oneadmin user (see this for more information), no additional configuration is needed to use an SSH Transfer Driver based vmware or vmfs datastore.
The VMFS drivers are a specialization of the shared drivers that works with the VMware vmdk filesystem tools. The same features/restrictions and configuration apply, so be sure to read the shared driver section.
The difference with the shared drivers is that the “vmkfstools” command is used, which specializes in VMFS volumes. This brings a number of advantages, like filesystem locking, easier VMDK cloning, format management, etc.
The first step to create a datastore is to set up a template file for it. In the following table you can see the supported configuration attributes. The datastore type is set by its drivers, and there are two choices for VMware clouds managed by OpenNebula:

- vmware: this datastore is a specialized version of the Filesystem datastore that deals with the vmdk format. Considerations also need to be taken into account regarding the system datastore, since compatibility must be ensured between the system datastore and all the registered datastores.
- vmfs: an important step to configure the vmfs datastore set of drivers is to choose the ESX bridges, i.e., the ESX servers that are going to be used as proxies to stage images into the vmfs datastore. A list of bridges must be defined with the BRIDGE_LIST attribute of the datastore template (see the table below). The drivers will pick one ESX server from that list in a round robin fashion. The vmfs datastore needs to use the front-end as a buffer for the image staging in some cases; this buffer can be set in the DS_TMP_DIR attribute. Default values for these drivers can be changed in the /var/lib/one/remotes/datastore/vmfs/vmfs.conf file.
Attribute | Description |
---|---|
NAME | The name of the datastore |
DS_MAD | The DS type, use vmware or vmfs |
TM_MAD | Transfer drivers for the datastore: shared , ssh or vmfs , see below |
DISK_TYPE | Type for the VM disks using images from this datastore. Supported values are: block , file |
RESTRICTED_DIRS | Paths that can not be used to register images. A space separated list of paths. |
SAFE_DIRS | If you need to un-block a directory under one of the RESTRICTED_DIRS. A space separated list of paths. |
UMASK | Default mask for the files created in the datastore. Defaults to 0007 |
BRIDGE_LIST | Space separated list of ESX servers that are going to be used as proxies to stage images into the datastore (vmfs datastores only) |
TM_USE_SSH | “yes” or “no”. Whether the vmfs TMs will use ssh or not respectively (vmfs TMs only). Defaults to the value in /var/lib/one/remotes/tm/vmfs/tm_vmfs.conf . |
DS_USE_SSH | “yes” or “no”. Whether the vmfs datastore drivers will use ssh or not respectively (vmfs datastores only). Defaults to the value in /var/lib/one/remotes/datastore/vmfs/vmfs.conf . |
DS_TMP_DIR | Path in the OpenNebula front-end to be used as a buffer to stage in files in vmfs datastores. Defaults to the value in /var/lib/one/remotes/datastore/vmfs/vmfs.conf . |
NO_DECOMPRESS | Do not try to untar or decompress the file to be registered. Useful for specialized Transfer Managers |
For example, the following illustrates the creation of a vmware
datastore using the shared transfer drivers.
<xterm>
$ cat ds.conf
NAME   = production
DS_MAD = vmware
TM_MAD = shared
$ onedatastore create ds.conf
ID: 100
$ onedatastore list
  ID NAME            CLUSTER  IMAGES TYPE   TM
   0 system          none          0 fs     shared
   1 default         none          3 fs     shared
 100 production      none          0 vmware shared
</xterm>
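A vmfs datastore is registered the same way. The following is a sketch of a template using the attributes from the table above; the datastore name, the host names in BRIDGE_LIST, and the staging path are placeholders:

```shell
# Hypothetical vmfs datastore template; esx1/esx2 and /var/tmp are
# placeholder values to be replaced with your own.
cat > vmfs_ds.conf <<'EOF'
NAME = production_vmfs
DS_MAD = vmfs
TM_MAD = vmfs
BRIDGE_LIST = "esx1 esx2"
DS_TMP_DIR = /var/tmp
EOF

# Then, on the front-end as oneadmin:
#   onedatastore create vmfs_ds.conf
```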
You can check more details of the datastore by issuing the onedatastore show
command.
Finally, you have to prepare the storage for the datastore and configure the hosts to access it. This depends on the transfer mechanism you have chosen for your datastore.
Drivers can be easily customized. Please refer to the specific guide for each datastore driver or to the Storage subsystem developer's guide.
However, you may find the files you need to modify here:

/var/lib/one/remotes/datastore/<DS_DRIVER>
/var/lib/one/remotes/tm/<TM_DRIVER>