The System Datastore 4.0

Overview

The system datastore holds the images of running VMs. Every OpenNebula installation has a default system datastore, and it cannot be used to manually register images: OpenNebula automatically copies, moves and deletes images from it as the VMs need them.

The system datastore always has ID 0, and it is located in /var/lib/one/datastores/0. For each running or stopped VM there is a directory /var/lib/one/datastores/0/<vm_id> that stores the VM images as disk.0, disk.1… For example, the structure of the system datastore with 3 VMs (VM 0 and 2 running, and VM 7 stopped) would be similar to: <xterm> datastores
`-- 0/
    |-- 0/
    |   |-- disk.0
    |   `-- disk.1
    |-- 2/
    |   `-- disk.0
    `-- 7/
        `-- disk.0

</xterm>

Configuration

There are two steps to configure the system datastore:

  • First you need to prepare the storage area for the system datastore and make it accessible through /var/lib/one/datastores/0, e.g. by mounting the storage device at that location or linking it there (see the sketch after this list). The actual size needed for the system datastore depends on the transfer mechanism, see below.
  • Then you need to choose how the images of the running VMs are accessed in the hosts. The system datastore can use the shared and ssh transfer drivers.
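
A minimal sketch of the first step, assuming a hypothetical NFS export nfs-server:/export/one_system (any other storage device or already mounted directory works the same way):

<xterm> # Option A: mount the storage device directly at the datastore location
$ sudo mount -t nfs nfs-server:/export/one_system /var/lib/one/datastores/0

# Option B: link an already mounted storage area there
# (the empty directory must be removed first so the symlink can be created)
$ rmdir /var/lib/one/datastores/0
$ ln -s /mnt/one_system /var/lib/one/datastores/0 </xterm>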

:!: Check that SSH is configured to enable passwordless access for oneadmin in every host
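
A quick way to verify this from the front-end (host01 is a hypothetical host name); the command must complete without prompting for a password:

<xterm> $ ssh oneadmin@host01 echo OK
OK </xterm>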

Using the Shared Transfer Driver

By default the system datastore is configured to use this driver, so you do not need to modify it. The shared transfer driver requires the hosts (but not the front-end) to share the system datastore directory. Typically these storage areas are shared using a distributed FS like NFS, GlusterFS, Lustre, etc.

A shared system datastore usually reduces VM deployment times and enables live migration, but it can also become a bottleneck in your infrastructure and degrade your VMs' performance if the virtualized services perform disk-intensive workloads. This limitation can usually be overcome by:

  • Using different filesystem servers for the image datastores, so the actual I/O bandwidth is balanced
  • Caching locally the VM images at the hosts, using the ssh transfer driver
  • Tuning or improving the filesystem servers

Host Configuration

Each host has to mount the system datastore under $DATASTORE_LOCATION/0. The DATASTORE_LOCATION value can be defined per cluster, as explained in the cluster guide. The default value for all clusters can be defined in oned.conf (if no value is defined there, it falls back to /var/lib/one/datastores).
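
For instance, with the default DATASTORE_LOCATION and a hypothetical NFS server, each host could add an entry like this to /etc/fstab:

<xterm> # server name and export path are examples
nfs-server:/export/one_system  /var/lib/one/datastores/0  nfs  soft,intr,rsize=32768,wsize=32768  0  0 </xterm>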

In small installations the front-end can also be used to export the system datastore directory to the hosts, although this setup is not recommended for medium or large deployments.

:!: There is no need to mount the system datastore in the OpenNebula front-end as /var/lib/one/datastores/0

:!: DATASTORE_LOCATION defines the path used to access the datastores in the hosts. It can be defined for each cluster; if it is not defined for the cluster, the default in oned.conf will be used.

:!: When needed, the front-end will access the datastores at /var/lib/one/datastores. This path cannot be changed, but you can link each datastore directory to a suitable location.
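
For example, assuming a hypothetical layout where the real storage for the default images datastore (ID 1) lives under /mnt/bigstorage:

<xterm> $ mv /var/lib/one/datastores/1 /mnt/bigstorage/1
$ ln -s /mnt/bigstorage/1 /var/lib/one/datastores/1 </xterm>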

Using the SSH Transfer Driver

In this case the system datastore is distributed among the hosts. The ssh transfer driver uses the hosts' local storage to place the images of running VMs (as opposed to a shared FS in the shared driver). All the operations are then performed locally, but images always have to be copied to the hosts, which can be a very resource-demanding operation. This driver also prevents the use of live migration between hosts.

To use this driver, you need to update the system datastore: <xterm> $ onedatastore update 0
#Edit the file to read as:
TM_MAD=ssh </xterm>
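
You can verify the change by showing the datastore information (the ellipses stand for the rest of the output):

<xterm> $ onedatastore show 0
...
TM_MAD="ssh"
... </xterm>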

Host Configuration

No special configuration is needed to use the ssh driver for the system datastore. Just be sure that there is enough space under $DATASTORE_LOCATION/0 to hold the images of the VMs that will run in each particular host.

Also be sure that there is enough space in the front-end under /var/lib/one/datastores/0 to hold the images of the stopped VMs.
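
A simple way to check the available space (assuming the default DATASTORE_LOCATION):

<xterm> # On each host
$ df -h /var/lib/one/datastores/0

# On the front-end
$ df -h /var/lib/one/datastores/0 </xterm>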

:!: DATASTORE_LOCATION defines the path used to access the datastores in the hosts. It can be defined for each cluster; if it is not defined for the cluster, the default in oned.conf will be used.

:!: When needed, the front-end will access the datastores at /var/lib/one/datastores. This path cannot be changed, but you can link each datastore directory to a suitable location.

The System Datastore for Multi-Cluster Setups

When hosts are grouped in a cluster, a different system datastore can be set for each one. Consider these two scenarios:

  • You want to use two different NFS servers for different clusters, and so balance I/O requests among them.
  • Some hosts use a shared directory, while others use the ssh transfer driver. You can configure the system datastore to use the shared TM driver, and then create a new system datastore with a template similar to this one:

<xterm> $ cat system.ds
NAME   = ssh_ds
TM_MAD = ssh
TYPE   = SYSTEM_DS

$ onedatastore create system.ds
ID: 100 </xterm>

To associate this system datastore with the cluster, add it to the cluster. You will see that the cluster's “SYSTEM DATASTORE” attribute is updated.

<xterm> $ onecluster adddatastore ssh_cluster ssh_ds

$ onecluster show ssh_cluster
CLUSTER 100 INFORMATION
ID               : 100
NAME             : ssh_cluster
SYSTEM DATASTORE : 100 </xterm>

You can also set the DATASTORE_LOCATION for the hosts of a cluster using the DATASTORE_LOCATION attribute. It can be changed with the onecluster update command.

<xterm> $ onecluster update ssh_cluster
#Edit the file to read as:
DATASTORE_LOCATION=/path/to/datastores/ </xterm>

Tuning and Extending

Drivers can be easily customized. Please refer to the specific guide for each datastore driver, or to the Storage subsystem developer's guide.

However, you may find the files you need to modify here:

  • /var/lib/one/remotes/datastore/<DS_DRIVER>
  • /var/lib/one/remotes/tm/<TM_DRIVER>
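
For instance, to inspect the scripts of the ssh TM driver and the fs datastore driver (adjust the driver names to the ones you are using):

<xterm> $ ls /var/lib/one/remotes/tm/ssh
$ ls /var/lib/one/remotes/datastore/fs </xterm>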