Storage Subsystem 2.2
One key aspect of virtualization management is the process of dealing with Virtual Machine images. There are a number of different possible configurations depending on the user's needs. For example, the user may want all her images placed on a separate repository with only HTTP access. Or images can be shared through NFS between all the hosts. OpenNebula aims to be flexible enough to support as many different image storage configurations as possible.
The image storage model upon which OpenNebula can organize the images uses the following concepts:

- Image Repository: any storage medium, local or remote, that holds the base images of the VMs. It needs to be accessible from the OpenNebula front-end.
- Virtual Machine Directory: a directory of the form <VM_DIR>/<VID> on the cluster node where a VM is running. The <VM_DIR> path is defined in the oned.conf file for the cluster. Deployment files for the hypervisor to boot the machine, checkpoints and images being used or saved, all of them specific to that VM, will be placed into this directory. Note that <VM_DIR> should be shared for most hypervisors to be able to perform live migrations.

Any given VM image to be used goes through the following steps:

- Cloning: taking the image from the repository and placing a copy of it in the VM directory, so the original image is left untouched. There is a qualifier (clone) for the images that can mark them as to be cloned or not.
- Saving: if the save qualifier is activated the image will be saved for later use under $ONE_LOCATION/var/<VID>/images.
Note: If OpenNebula was installed in system-wide mode this directory becomes /var/lib/one/images. The rest of this guide refers to the $ONE_LOCATION paths (corresponding to self-contained mode) and omits the equivalent system-wide locations. More information on installation modes can be found in the installation guide.
The storage model assumed by OpenNebula does not require any special software to be installed. The following are two cluster configuration examples supported by OpenNebula out-of-the-box. They represent the choice of either sharing <VM_DIR> among all the cluster nodes and the cluster front-end via NFS, or not sharing any folder and having the machines accessible using SSH. Please note that the Transfer Manager was built using a modular architecture, where each action is associated with a small script that can easily be tuned to fit your cluster configuration. A third choice (sharing the image repository but not the <VM_DIR>) is explained in the Customizing & Extending section.
This arrangement of the storage model assumes that <VM_DIR> is shared between all the cluster nodes and the OpenNebula server. In this case, the semantics of the clone and save actions described above are:

- Cloning: if an image is clonable, it will be copied from the image repository to <VM_DIR>/<VID>/images, from where it will be used by the VM. If not, a symbolic link pointing to the image repository will be created in <VM_DIR>/<VID>/images, so effectively the VM is going to use the original image.
- Saving: this only has an effect if the image is not clonable; if it is, then saving comes for free. Therefore, if the image is not clonable but is savable, the image will be moved from <VM_DIR>/<VID>/images to $ONE_LOCATION/var/<VID>/images.
Please note that by default <VM_DIR> is set to $ONE_LOCATION/var.
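The clone decision described above can be sketched as a few lines of shell. This is illustrative only: the real logic lives in the tm_nfs scripts shipped with OpenNebula, and the paths below are throwaway stand-ins for the repository and <VM_DIR>/<VID>.

```shell
# Sketch of the shared-NFS clone semantics (illustrative only; the real
# implementation is the tm_nfs script set shipped with OpenNebula).
# clone_image SRC DST CLONABLE -- copy when clonable, symlink otherwise.
clone_image() {
    src="$1"; dst="$2"; clonable="$3"
    mkdir -p "$(dirname "$dst")"
    if [ "$clonable" = "yes" ]; then
        cp "$src" "$dst"        # VM works on a private copy
    else
        ln -s "$src" "$dst"     # VM uses the original image in place
    fi
}

# Demo with temporary directories standing in for the real locations
repo="$(mktemp -d)"; vmdir="$(mktemp -d)"
echo "base image" > "$repo/ttylinux.img"
clone_image "$repo/ttylinux.img" "$vmdir/42/images/disk.0" yes
clone_image "$repo/ttylinux.img" "$vmdir/42/images/disk.1" no
ls -l "$vmdir/42/images"
```

Note how the non-clonable case explains the save semantics: the symlink points at the original, so there is no per-VM copy to throw away.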
In this scenario, <VM_DIR> is not shared between the cluster front-end and the nodes. Note that <VM_DIR> can still be shared between the cluster nodes to perform live migrations. The semantics of clone and save are:
- Cloning: this attribute is ignored in this configuration, since images will always be cloned from the image repository to <VM_DIR>/<VID>/images.
- Saving: if enabled, the image will be transferred back from <VM_DIR>/<VID>/images to $ONE_LOCATION/var/<VID>/images/. If not enabled, the image will simply be erased. It is therefore the user's responsibility to reuse the image from $ONE_LOCATION/var/<VID>/images/ in subsequent uses of the VM in order to use any configuration done or data stored in it.

There are many possible scenarios in which we can take advantage of OpenNebula's support of block devices, especially if we use LVM. The most powerful advantage of using LVM is snapshotting, which results in the immediate creation of disk devices.
OpenNebula ships with a set of Transfer Manager scripts which support LVM. The idea behind these scripts is not to provide a full-blown LVM solution, but a basic example which can be tailored to fit a more specific scenario.
The Transfer Manager assumes that the block devices defined in the VM template are available in all the nodes; i.e., if we have
source = "/dev/default/ttylinux",
then /dev/default/ttylinux must exist in the node where the VM will be deployed. This can be achieved either by creating the device by hand or by using more sophisticated techniques like exporting an LVM volume group to the cluster.
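A tiny pre-flight check along these lines can catch a missing device before a deployment fails. The device path is the one from the example above; the helper itself is not part of OpenNebula, just a sketch.

```shell
# Pre-flight sketch (not part of OpenNebula): verify that the source block
# device referenced by a VM template exists on this node before deploying.
check_source_device() {
    if [ -b "$1" ]; then
        echo "ok: $1"
    else
        echo "missing: $1"
    fi
}

check_source_device /dev/default/ttylinux
```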
- Cloning: a new snapshot will be created and assigned to the VM. This process is almost instantaneous.
- Saving: saving disk images is supported by dumping the device to a file and scp'ing the disk image back to the front-end.
- Stop/Migration: these features have not been implemented for this Transfer Manager, since they depend strongly on the scenario.
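As a rough, dry-run sketch of what these clone and save steps amount to: the commands below are echoed rather than executed, since they would need root and a real volume group, and the names (volume group "default", source LV "ttylinux", VID 42, 1G snapshot size) are assumptions taken from the example above, not what the shipped tm_lvm scripts necessarily use.

```shell
# Dry-run sketch of the LVM clone/save steps (assumed names; the real
# commands live in the tm_lvm script set shipped with OpenNebula).
VG=default
SRC_LV=ttylinux
VID=42

# Clone: snapshot the source LV -- near-instant, copy-on-write
clone_cmd="lvcreate -s -L1G -n lv-one-$VID /dev/$VG/$SRC_LV"

# Save: dump the device to a file, then copy it back to the front-end
save_cmd="dd if=/dev/$VG/lv-one-$VID of=/tmp/disk.0"
copy_cmd="scp /tmp/disk.0 frontend:\$ONE_LOCATION/var/$VID/images/"

echo "$clone_cmd"
echo "$save_cmd"
echo "$copy_cmd"
```

The snapshot is what makes cloning "almost instantaneous": no data is copied at creation time, only blocks changed by the VM afterwards.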
The Transfer Manager is configured in the $ONE_LOCATION/etc/oned.conf file; see the Daemon Configuration file guide. To keep things flexible, the TM is always the same program, and different configurations are achieved by changing the configuration file. This file governs the assignment between actions, like CLONE or LN, and scripts, effectively changing the semantics of the actions understood by the TM.
TM_MAD = [ name = "tm_nfs", executable = "one_tm", arguments = "<tm-configuration-file>", default = "<default-tm-configuration-file>" ]
The current OpenNebula release contains sets of scripts for the scenarios described above: Shared - NFS ($ONE_LOCATION/etc/tm_nfs/tm_nfs.conf), Non-Shared - SSH ($ONE_LOCATION/etc/tm_ssh/tm_ssh.conf), and LVM ($ONE_LOCATION/etc/tm_lvm/tm_lvm.conf). Each TM has its own directory inside $ONE_LOCATION/etc.
Let's look at a sample line from the Shared - NFS configuration file:
... CLONE = nfs/tm_clone.sh ...
Basically, this line tells the TM that whenever it receives a CLONE action it should call the tm_clone.sh script with the received parameters. For more information on modifying and extending these scripts, see Customizing & Extending.
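To give an idea of the kind of script the TM invokes, here is a stripped-down sketch in the spirit of tm_clone.sh. Everything below is simplified: the real script that ships with OpenNebula also deals with host-prefixed host:path arguments, logging and error handling, and it is written as a standalone script rather than a function.

```shell
# Stripped-down sketch in the spirit of tm_clone.sh (illustrative only).
# tm_clone SRC DST -- place a writable copy of SRC at DST for the VM.
tm_clone() {
    SRC="$1"; DST="$2"
    mkdir -p "$(dirname "$DST")"   # make sure <VM_DIR>/<VID>/images exists
    cp "$SRC" "$DST"               # clone the image for the VM
    chmod a+rw "$DST"              # let the hypervisor write to the copy
}

# Demo with throwaway paths standing in for repository and VM directory
src="$(mktemp)"; echo "image bits" > "$src"
dst="$(mktemp -d)/99/images/disk.0"
tm_clone "$src" "$dst"
```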
Note: Remember that if OpenNebula is installed in system-wide mode, the configuration files are placed in /etc/one.
To configure OpenNebula to handle images with the shared (NFS) arrangement of the storage model, add the following to $ONE_LOCATION/etc/oned.conf, so the TM knows which set of scripts it needs to use:
TM_MAD = [ name = "tm_nfs", executable = "one_tm", arguments = "tm_nfs/tm_nfs.conf", default = "tm_nfs/tm_nfs.conf" ]
To configure OpenNebula to handle images with the non-shared (SSH) arrangement of the storage model, add the following to $ONE_LOCATION/etc/oned.conf, so the TM knows which set of scripts it needs to use:
TM_MAD = [ name = "tm_ssh", executable = "one_tm", arguments = "tm_ssh/tm_ssh.conf", default = "tm_ssh/tm_ssh.conf" ]
To configure OpenNebula to handle images with the LVM storage model, add the following to $ONE_LOCATION/etc/oned.conf, so the TM knows which set of scripts it needs to use:
TM_MAD = [ name = "tm_lvm", executable = "one_tm", arguments = "tm_lvm/tm_lvm.conf" ]