Storage Subsystem 2.2

One key aspect of virtualization management is dealing with Virtual Machine images. There are a number of possible configurations depending on the user's needs. For example, a user may want all her images placed in a separate repository with only HTTP access, or images may be shared through NFS between all the hosts. OpenNebula aims to be flexible enough to support as many different image storage configurations as possible.

The image storage model upon which OpenNebula organizes images relies on two concepts: the image repository, accessible from the front-end, where the base images are kept, and the VM directory (<VM_DIR>/<VID>), where the images used by a running VM are placed on the cluster node.

The Image Life-Cycle

Any given VM image goes through the following steps during its life-cycle:

  • Preparation implies all the changes that need to be made to the machine's image so it can offer the service it is intended to. OpenNebula assumes that the image(s) that make up a particular VM are already prepared and placed in an accessible image repository.
  • Cloning the image means taking it from the repository and placing it in the VM's directory before the VM is actually booted. If an image is cloned, the original is not used directly; a copy is used instead. There is a qualifier (clone) that marks images as to be cloned or not (see the template sketch after this list).
  • Save / Remove: once the VM has been shut down, its images, and all the changes made to them, are disposed of. However, if the save qualifier is activated, the image is saved for later use under $ONE_LOCATION/var/<VID>/images.
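
As an illustration, the clone and save qualifiers are set per disk in the VM template. A minimal sketch, with a made-up image path and target, could look like this:

DISK = [
    source = "/srv/images/ttylinux.img",
    target = "sda",
    clone  = "yes",
    save   = "no" ]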

Note: If OpenNebula was installed in system wide mode this directory becomes /var/lib/one/images. The rest of this guide refers to the $ONE_LOCATION paths (corresponding to self contained mode) and omits the equivalent system wide locations. More information on installation modes can be found in the installation guide.

Physical Cluster Configuration

The storage model assumed by OpenNebula does not require any special software to be installed. The following are two cluster configuration examples supported by OpenNebula out of the box. They represent the choice of either sharing <VM_DIR> among all the cluster nodes and the cluster front-end via NFS, or not sharing any folder and having the machines accessible through SSH. Please note that the Transfer Manager was built with a modular architecture, where each action is associated with a small script that can easily be tuned to fit your cluster configuration. A third choice (sharing the image repository but not <VM_DIR>) is explained in the Customizing & Extending section.

Shared - NFS

This arrangement of the Storage Model assumes that the <VM_DIR> is shared between all the cluster nodes and the OpenNebula server. In this case, the semantics of the clone and save actions described above are:

  • Cloning: If an image is clonable, it will be copied from the image repository to <VM_DIR>/<VID>/images, from where it will be used by the VM. If not, a symbolic link pointing to the image repository will be created in <VM_DIR>/<VID>/images, so the VM will effectively use the original image.
  • Saving: This only has an effect if the image is not clonable; if it is clonable, saving comes for free. Therefore, if the image is not clonable and is savable, it will be moved from <VM_DIR>/<VID>/images to $ONE_LOCATION/var/<VID>/images.

Please note that by default <VM_DIR> is set to $ONE_LOCATION/var.
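
As a reference, sharing <VM_DIR> is usually done with a plain NFS export from the front-end. A rough sketch, assuming $ONE_LOCATION is /srv/cloud/one and hypothetical node names, would be:

# /etc/exports on the OpenNebula front-end (hosts and options are examples)
/srv/cloud/one/var  node01(rw,sync,no_subtree_check) node02(rw,sync,no_subtree_check)

# on each cluster node, mount the export on the same path
mount front-end:/srv/cloud/one/var /srv/cloud/one/var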

Figure: Storage Model (NFS)

Non-Shared - SSH

In this scenario, <VM_DIR> is not shared between the cluster front-end and the nodes. Note that <VM_DIR> can still be shared between the cluster nodes to perform live migrations. The semantics of clone and save are:

  • Cloning: This attribute is ignored in this configuration, since images will always be cloned from the image repository to <VM_DIR>/<VID>/images.
  • Saving: If enabled, the image will be transferred back from <VM_DIR>/<VID>/images to $ONE_LOCATION/var/<VID>/images/. If not enabled, the image will simply be erased. It is therefore the user's responsibility to reuse the image from $ONE_LOCATION/var/<VID>/images/ in subsequent uses of the VM in order to keep any configuration done or data stored in it.
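
This configuration also assumes that the user running OpenNebula (typically oneadmin) can reach the cluster nodes through SSH without a password, since images are copied over SSH. A rough sketch of that setup, with hypothetical node names, is:

# as the OpenNebula user on the front-end: create a key pair and push it to the nodes
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id oneadmin@node01
ssh-copy-id oneadmin@node02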

Figure: Storage Model (SSH)

LVM

There are many possible scenarios in which we can take advantage of OpenNebula's support for block devices, especially if we use LVM. The most powerful advantage of using LVM is snapshotting, which allows new disk devices to be created almost instantly.

OpenNebula ships with a set of Transfer Manager scripts which support LVM. The idea behind these scripts is not to provide a full-blown LVM solution, but a basic example which can be tailored to fit a more specific scenario.

The Transfer Manager assumes that the block devices defined in the VM template are available on all the nodes, i.e. if we have

source   = "/dev/default/ttylinux",

then /dev/default/ttylinux must exist on the node where the VM will be deployed. This can be achieved either by creating the device by hand or by using more sophisticated techniques such as exporting an LVM volume group to the cluster.
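
For instance, taking the template line above, the source device could be created by hand on each node. This is only a sketch, assuming a volume group named default already exists and using an illustrative size:

# on each cluster node
lvcreate -L 1G -n ttylinux default
# then restore the base image into the new logical volume
dd if=ttylinux.img of=/dev/default/ttylinux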

  • Cloning: A new snapshot of the source device will be created and assigned to the VM. This process is almost instantaneous (see the sketch after this list).
  • Saving: Saving disk images is supported by dumping the device to a file and copying that disk image back to the front-end with scp.
  • Stop/Migration: These features have not been implemented for this Transfer Manager, since they depend strongly on the scenario.
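
Conceptually, the clone step boils down to an LVM snapshot of the source device, and the save step to dumping that device back to a file. A hand-run equivalent, with illustrative names, sizes and paths (the actual device names are chosen by the tm_lvm scripts), would be:

# clone: take a snapshot of the source logical volume
lvcreate -s -L 1G -n ttylinux-vm42 /dev/default/ttylinux

# save: dump the device to a file and copy it back to $ONE_LOCATION/var/<VID>/images on the front-end
dd if=/dev/default/ttylinux-vm42 of=disk.0
scp disk.0 front-end:/srv/cloud/one/var/42/images/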

Configuration Interface

The Transfer Manager is configured in the $ONE_LOCATION/etc/oned.conf file; see the Daemon Configuration file. To keep it flexible, the TM is always the same program, and different configurations are achieved by changing its configuration file. This file maps actions, such as CLONE or LN, to scripts, effectively changing the semantics of the actions understood by the TM.

TM_MAD = [
    name       = "tm_nfs",
    executable = "one_tm",
    arguments  = "<tm-configuration-file>",
    default    = "<default-tm-configuration-file>" ]

The current OpenNebula release contains two sets of scripts for the two scenarios described above: Shared - NFS ($ONE_LOCATION/etc/tm_nfs/tm_nfs.conf) and Non-Shared - SSH ($ONE_LOCATION/etc/tm_ssh/tm_ssh.conf). Each TM has its own directory inside $ONE_LOCATION/etc.

Let's look at a sample line from the Shared - NFS configuration file:

...
CLONE   = nfs/tm_clone.sh
...

Basically, the TM here is being told that whenever it receives a CLONE action it should call the tm_clone.sh script with the received parameters. For more information on modifying and extending these scripts, see Customizing and Extending.
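
The rest of the file follows the same pattern, mapping each TM action to a script. A hedged excerpt (the exact set of actions and script names may vary between releases) could look like:

CLONE   = nfs/tm_clone.sh
LN      = nfs/tm_ln.sh
MV      = nfs/tm_mv.sh
DELETE  = nfs/tm_delete.sh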

Note: Remember that if OpenNebula was installed system wide, the configuration files are placed in /etc/one.

Example Shared - NFS

To configure OpenNebula to handle images with this arrangement of the storage model, add the following to $ONE_LOCATION/etc/oned.conf, so the TM knows which set of scripts to use:

TM_MAD = [
    name       = "tm_nfs",
    executable = "one_tm",
    arguments  = "tm_nfs/tm_nfs.conf",
    default    = "tm_nfs/tm_nfs.conf" ]

Example Non-shared - SSH

To configure OpenNebula to handle images with the non-shared arrangement of the storage model, add the following to $ONE_LOCATION/etc/oned.conf, so the TM knows which set of scripts to use:

TM_MAD = [
    name       = "tm_ssh",
    executable = "one_tm",
    arguments  = "tm_ssh/tm_ssh.conf",
    default    = "tm_ssh/tm_ssh.conf" ]

Example LVM

To configure OpenNebula to handle images with the LVM storage model, add the following to $ONE_LOCATION/etc/oned.conf, so the TM knows which set of scripts to use:

TM_MAD = [
    name       = "tm_lvm",
    executable = "one_tm",
    arguments  = "tm_lvm/tm_lvm.conf" ]