Managing Virtual Machines 2.0
OpenNebula is able to assign, deploy, monitor and control VMs. This guide explains how to describe the Virtual Machine you want to run, and how users typically interact with the system.
A Virtual Machine within the OpenNebula system consists of:

* A capacity in memory and processors.
* A set of NICs attached to one or more virtual networks.
* A set of disk images.
* A state file (optional) or recovery file, with the memory image of a running VM plus some hypervisor-specific information.
The above items, plus some additional VM attributes like the OS kernel and context information to be used inside the VM, are specified in a VM template file.
Each VM in OpenNebula is identified by a unique number, the <VID>. The user can also assign it a name in the VM template; the default name for each VM is one-<VID>.
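The name can be set in the template with the NAME attribute. A minimal sketch (the name and capacity values here are illustrative):

```
NAME   = my-first-vm
CPU    = 1
MEMORY = 256
```

If NAME is omitted, the VM is simply named one-<VID>.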
The life-cycle of a virtual machine within the system includes the following stages (note that this is a simplified version):
* Pending (pend): By default a VM starts in the pending state, waiting for a resource to run on.
* Hold (hold): The owner has held the VM and it will not be scheduled until it is released.
* Prolog (prol): The system is transferring the VM files (disk images and the recovery file).
* Running (runn): The VM is running (note that this stage includes the booting and shutting down phases). In this state, the virtualization driver will periodically monitor it.
* Migrate (migr): The VM is migrating from one resource to another. This can be a live migration or a cold migration (the VM is saved and the VM files are transferred to the new resource).
* Epilog (epil): In this phase the system cleans up the cluster node used to host the VM, and additionally any disk images to be saved are copied back to the cluster front-end.
* Stopped (stop): The VM is stopped. The VM state has been saved and transferred back along with the disk images.
* Suspended (susp): Same as stopped, but the files are left in the remote resource so the VM can later be restarted there (i.e. there is no need to re-schedule it).
* Failed (fail): The VM failed.
* Unknown (unknown): The VM could not be reached; it is in an unknown state.
* Done (done): The VM is done. VMs in this state will not be shown with "onevm list", but they are kept in the database for accounting purposes.

OpenNebula templates are designed to be hypervisor-agnostic, but there are still some peculiarities to take into account, and mandatory attributes change depending on the target hypervisor. Hypervisor-specific information for these attributes can be found in the driver configuration guides:
OpenNebula has been designed to be easily extended, so any attribute name can be defined for later use in any OpenNebula module. There are some pre-defined attributes, though.
Please check the Virtual Machine definition file reference for details on all the sections.
For example, the following template defines a VM with 512MB of memory and one CPU. The VM has three disks:
Only one NIC is defined, attached to a virtual network.
The context section will generate a CDROM with the files specified, that will be run at startup. Please read the contextualization guide if you want to learn more about this feature.
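For reference, a minimal context section only needs the files attribute, listing the files to be included on the generated CDROM (the paths below are illustrative):

```
CONTEXT = [
  files = "/service/init.sh /service/service.conf" ]
```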
In this case, among all the suitable hosts (those that meet the CPU and MEMORY requirements, and also the CPUSPEED > 1000 requirement), OpenNebula will pick the host with the most free CPU.
#---------------------------------------
# VM definition example
#---------------------------------------
NAME   = vm-example

CPU    = 1
MEMORY = 512

# --- kernel & boot device ---

OS = [
  kernel = "/vmlinuz",
  initrd = "/initrd.img",
  root   = "sda" ]

# --- 3 disks ---

DISK = [ image = "Debian 5.0" ]

DISK = [ image = "Testing results" ]

DISK = [ type     = swap,
         size     = 1024,
         readonly = "no" ]

# --- 1 NIC ---

NIC = [ network = "Private lab 3 LAN" ]

# --- Placement options ---

REQUIREMENTS = "CPUSPEED > 1000"
RANK         = FREECPU

# --- Contextualization ---

CONTEXT = [
  files = "/service/init.sh /service/certificates /service/service.conf" ]
The disks will be mounted as sda for the OS, sdb for the generated context CDROM, sdd for the swap disk, and sde for "Testing results". This assignment scheme is fully documented in the virtual machine template reference.
You can add as many DISK and NIC attributes as you need.
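For instance, a template can attach several NICs simply by repeating the attribute (the network names below are illustrative):

```
NIC = [ network = "Public LAN" ]
NIC = [ network = "Private lab 3 LAN" ]
```

Each NIC will appear as a separate interface inside the guest.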
OpenNebula comes with a rich command line interface intended for users fond of consoles. A complete reference for these commands can be found here.
The following sections show the basics of the onevm
command with simple usage examples.
This command enables virtual machine management. Actions offered are:
* create (a VM in OpenNebula's VM pool)
* deploy (on a particular cluster node)
* shutdown
* livemigrate (the virtual machine is transferred between cluster nodes with no noticeable downtime)
* migrate (the machine is stopped and resumed elsewhere)
* hold
* release (from hold state)
* stop (the virtual machine state is transferred back to OpenNebula for a possible reschedule)
* cancel
* suspend (the virtual machine state is left in the cluster node for resuming)
* resume
* delete
* restart (resubmits the virtual machine after failure)
* list (outputs all the available VMs)
* show (outputs information for a specific VM)
* top (lists VMs continuously)
* history (gets the VMs' history of execution on the cluster nodes)

Assuming we have a VM template called myVM.one describing a virtual machine, we can allocate the VM in OpenNebula by issuing:
<xterm> $ onevm create myVM.one </xterm>
Afterwards, the VM can be listed with the list option:
<xterm> $ onevm list
  ID     USER     NAME STAT CPU     MEM HOSTNAME        TIME
   0 oneadmin vm-examp runn   0 2097152     ursa 00 00:04:21
</xterm>
and details about it can be obtained with show
:
<xterm> $ onevm show 0
VIRTUAL MACHINE 0 INFORMATION
ID             : 0
NAME           : vm-example
STATE          : ACTIVE
LCM_STATE      : RUNNING
START TIME     : 07/15 15:48:42
END TIME       : -
DEPLOY ID      : one-0

VIRTUAL MACHINE TEMPLATE
CONTEXT=[
  FILES=/service/init.sh /service/certificates /service/service.conf ]
CPU=1
DISK=[
  CLONE=YES,
  DISK_ID=0,
  IMAGE=Debian 5.0,
  IMAGE_ID=10,
  READONLY=NO,
  SAVE=NO,
  SOURCE=/home/cloud/opennebula/var/images/147f94ddb708851e71651f05caf81da0131cc904,
  TARGET=hda,
  TYPE=DISK ]
DISK=[
  CLONE=YES,
  DISK_ID=1,
  IMAGE=Testing results,
  IMAGE_ID=15,
  READONLY=NO,
  SAVE=NO,
  SOURCE=/home/cloud/opennebula/var/images/6478ab2b7f839538c6dc4d525ea0153387a91f1c,
  TARGET=hde,
  TYPE=DISK ]
DISK=[
  READONLY=no,
  SIZE=1024,
  DISK_ID=2,
  TYPE=swap ]
MEMORY=512
NAME=vm-example
NIC=[
  BRIDGE=bpriv,
  IP=192.168.30.1,
  MAC=02:00:c0:a8:1e:01,
  NETWORK=Private lab 3 LAN,
  NETWORK_ID=2 ]
RANK=FREECPU
REQUIREMENTS=CPUSPEED > 1000
VMID=0 </xterm>