Shared File System 3.0

The shared storage model requires the front-end and the hosts to share the VM directories (in VM_DIR) and the Image Repository. Both locations have to be mounted at the same path on the front-end and the hosts. Typically these storage areas are shared using a distributed FS like NFS, GlusterFS or Lustre. In this guide we'll configure NFS, but you can use any of these alternatives.

Storage Model: SHARED


Requirements

The default <VM_DIR> path is /var/lib/one; this is where the TM (Transfer Manager) driver places the cloned Image files for the running VMs. The master Images are stored in the Image Repository, which by default is /var/lib/one/images.

For this setup, the front-end and the hosts must have <VM_DIR> and the Image Repository shared and mounted at the same path. Hence, with the default values you only need to share the front-end directory /var/lib/one.

Taking NFS as an example, make sure the NFS daemons are running on the front-end (to export /var/lib/one) and on the hosts. If all your hosts are in a local network with address 192.168.0.0/24, add a line like this to the front-end's /etc/exports file:

/var/lib/one 192.168.0.0/255.255.255.0(rw)
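
On each host, the exported directory then has to be mounted at the same path. A minimal sketch, assuming the front-end is reachable under the placeholder hostname 'frontend':

# run as root on each host; 'frontend' is a placeholder hostname
mount -t nfs frontend:/var/lib/one /var/lib/one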

Note: You may be interested in the ecosystem component TM MooseFS.

Considerations & Limitations

Shared storage reduces VM deployment times and enables live migration, but it can also become a bottleneck in your infrastructure and degrade your VMs' performance if the virtualized services run disk-intensive workloads. This limitation can usually be overcome by caching the VM Images locally on the hosts or by tuning/improving the file system.
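
For instance, one common NFS tuning is to increase the client's read and write block sizes in the mount options. A hedged example of an /etc/fstab entry on the hosts (the option values and the 'frontend' hostname are illustrative, not a recommendation for every setup):

# /etc/fstab on each host; values are illustrative
frontend:/var/lib/one  /var/lib/one  nfs  soft,intr,rsize=32768,wsize=32768  0  0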

By default, OpenNebula uses SQLite as the DB backend. Due to file system locking problems, the DB will malfunction if the front-end is an NFS client. So if SQLite is used, the front-end machine must act as the NFS server. See the MySQL configuration guide to change the DB backend.
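
For reference, the backend is selected in the DB section of oned.conf. A sketch of a MySQL configuration with placeholder credentials (see the MySQL configuration guide for the full details):

DB = [ backend = "mysql",
       server  = "localhost",   # placeholder
       port    = 0,             # 0 = default MySQL port
       user    = "oneadmin",    # placeholder credentials
       passwd  = "oneadmin",
       db_name = "opennebula" ]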

Hypervisor

Most hypervisors require shared storage to perform live migrations. Please refer to the hypervisor documentation to find out the requirements in your case.

Persistent Images

If the VM uses a persistent Image, a symbolic link to the Image in the Repository is created in <VM_DIR>/<VID>/images instead of a copy. This allows an immediate deployment, and no extra time is needed to save the disk back to the repository when the VM is shut down.
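
As an illustration, for a hypothetical VM with ID 42 using the default paths, the link for the first disk can be inspected from the front-end or any host (the exact target name depends on the Image):

# disk.0 is the VM's first disk; the target lies in the Image Repository
readlink /var/lib/one/42/images/disk.0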

On the other hand, the master Image file is used directly, so if for some reason the VM fails and the Image data is corrupted or lost, there is no separate copy to recover from.

Non-Persistent Images

Non-persistent Images are copied from the Image Repository to <VM_DIR>/<VID>/images. Take this into account if you plan to use different storage for the Image Repository and <VM_DIR>, as the copy will increase the VM deployment time.

Save Images

Images created using the 'onevm saveas' command are moved from <VM_DIR>/<VID>/images to the Image Repository only after the VM is successfully shut down. This means the VM has to be shut down using the 'onevm shutdown' command, not 'onevm delete'. Suspending or stopping a running VM won't copy the disk file to the repository either.
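
For example, assuming a hypothetical VM with ID 42 whose first disk (disk ID 0) should be saved as a new Image called 'custom-image':

onevm saveas 42 0 custom-image
onevm shutdown 42
# the new Image appears in the Repository once the shutdown completes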

Check the VM life-cycle diagram for more information.

File System Permissions

Hypervisors and their utilities run as root on the physical hosts and must be able to access the exported Images. By default NFS does not give root clients superuser rights; they are mapped to the 'nobody' user. To overcome this you can:

  • Set very permissive permissions on the exported directories and Image files (directories: a+rx, files: a+rw)
  • Disable root squashing by adding no_root_squash to the NFS export options, as shown below
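
For example, extending the /etc/exports line shown above:

/var/lib/one 192.168.0.0/255.255.255.0(rw,no_root_squash)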

Configuring OpenNebula

The Shared File System TM driver is named “tm_shared” and works with any POSIX-compliant distributed file system. It is enabled by default in oned.conf:

TM_MAD = [
    name       = "tm_shared",
    executable = "one_tm",
    arguments  = "tm_shared/tm_shared.conf" ]

Tuning & Extending

The 'tm_shared' driver files are installed in /usr/lib/one/tm_commands/shared. There is a script for each action, which can be easily customized (a simplified sketch follows the list):

tm_clone.sh
tm_context.sh
tm_delete.sh
tm_ln.sh
tm_mkimage.sh
tm_mkswap.sh
tm_mv.sh

Follow the Transfer Manager Driver guide to learn how to tune and extend them.