Managing physical hosts and clusters 2.0
In order to use your existing physical nodes, you have to add them to the system as OpenNebula hosts. You need the following information:

* Information Driver to be used to monitor the host, e.g. im_kvm.
* Storage Driver to clone, delete, move or copy images into the host, e.g. tm_nfs.
* Virtualization Driver to boot, stop, resume or migrate VMs in the host, e.g. vmm_kvm.
OpenNebula 2.0 introduces support for clustering physical hosts.
By default, all hosts belong to the “default” cluster. The OpenNebula administrator can create and delete clusters, and add and remove hosts from these clusters, using the onecluster command.
Thanks to this feature, the administrator can logically group hosts by any attribute like the physical location (e.g. CLUSTER = production), or a given feature (e.g. CLUSTER = intel).
Users can require their virtual machines to be deployed on a host that meets certain constraints. These constraints can be defined using any attribute reported by onehost show, like the architecture (ARCH) or the cluster the host is assigned to.
To take advantage of this feature, use the REQUIREMENTS attribute of the virtual machine template. You may want to read the guide for managing virtual machines first.
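For instance, a minimal template fragment that restricts placement to hosts with a given architecture could look like this (ARCH as reported by onehost show; the value here is illustrative):

<xterm> REQUIREMENTS = "ARCH = \"i686\"" </xterm>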
Hosts can be added to the system anytime with the onehost command. You can add the cluster nodes to be used by OpenNebula like this:
<xterm> $ onehost create host01 im_kvm vmm_kvm tm_nfs
$ onehost create host02 im_kvm vmm_kvm tm_nfs </xterm>
The status of the cluster can be checked with the onehost list command:
<xterm> $ onehost list
  ID NAME       CLUSTER  RVM   TCPU   FCPU   ACPU    TMEM    FMEM STAT
   0 host01     default    2    100     90     90  523264  205824   on
   1 host02     default    7    100     99     99  523264  301056   on
   2 host03     default    0    100     99     99  523264  264192  off
</xterm>
And specific information about a host with onehost show:
<xterm> $ onehost show host01
HOST 0 INFORMATION
ID                    : 0
NAME                  : host01
CLUSTER               : default
STATE                 : MONITORED
IM_MAD                : im_kvm
VM_MAD                : vmm_kvm
TM_MAD                : tm_nfs

HOST SHARES
MAX MEM               : 523264
USED MEM (REAL)       : 317440
USED MEM (ALLOCATED)  : 131072
MAX CPU               : 100
USED CPU (REAL)       : 10
USED CPU (ALLOCATED)  : 20
RUNNING VMS           : 2

MONITORING INFORMATION
ARCH=i686
CPUSPEED=1995
FREECPU=90
FREEMEMORY=205824
HOSTNAME=host01
HYPERVISOR=xen
MODELNAME=Intel(R) Xeon(R) CPU L5335 @ 2.00GHz
NETRX=0
NETTX=0
TOTALCPU=100
TOTALMEMORY=523264
USEDCPU=10
USEDMEMORY=317440
</xterm>
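Besides hard requirements, these monitored attributes can be used to express scheduling preferences through the RANK attribute of the virtual machine template; a minimal sketch that makes the scheduler prefer the host with the most free CPU:

<xterm> RANK = FREECPU </xterm>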
If you do not want to use a given host, you can temporarily disable it:
<xterm> $ onehost disable host01 </xterm>
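The host can later be put back to work with the complementary subcommand:

<xterm> $ onehost enable host01 </xterm>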
A disabled host should be listed with STAT off by onehost list. You can also remove a host permanently with:
<xterm> $ onehost delete host01 </xterm>
The complete reference for the onehost utility can be found in the Command Line Reference.
If you use the onecluster list command, you will see that the “default” cluster is created automatically:
<xterm> $ onecluster list
  ID NAME
   0 default

$ onehost list
  ID NAME       CLUSTER  RVM   TCPU   FCPU   ACPU    TMEM    FMEM STAT
   0 ursa       default    0      0      0    100       0       0   on
   1 ursa01     default    0      0      0    100       0       0   on
   2 ursa02     default    0      0      0    100       0       0   on
   3 ursa03     default    0      0      0    100       0       0   on
   4 ursa04     default    0      0      0    100       0       0   on
</xterm>
You may want to isolate the physical hosts running virtual machines that contain important services for your business from those running a development version of your software. The OpenNebula administrator can do so with these commands:
<xterm> $ onecluster create testing
$ onecluster create production

$ onecluster addhost ursa01 production
$ onecluster addhost ursa03 testing
$ onecluster addhost ursa04 testing

$ onehost list
  ID NAME       CLUSTER     RVM   TCPU   FCPU   ACPU    TMEM    FMEM STAT
   0 ursa       default       0      0      0    100       0       0   on
   1 ursa01     production    0      0      0    100       0       0   on
   2 ursa02     default       0      0      0    100       0       0   on
   3 ursa03     testing       0      0      0    100       0       0   on
   4 ursa04     testing       0      0      0    100       0       0   on
</xterm>
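A single host can also be detached from its cluster without deleting the whole cluster; a hedged sketch, assuming the onecluster removehost subcommand mirrors the arguments of addhost:

<xterm> $ onecluster removehost ursa03 testing </xterm>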
From this point on, newly created virtual machines can use these cluster names as a placement requirement:
<xterm> REQUIREMENTS = "CLUSTER = \"testing\"" </xterm>
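A cluster requirement can also be combined with other host attributes in the same expression; a sketch, assuming the FREEMEMORY value (in KB) reported by onehost show and the & boolean operator of requirement expressions:

<xterm> REQUIREMENTS = "CLUSTER = \"testing\" & FREEMEMORY > 262144" </xterm>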
Once your development cycle is finished, the “testing” and “production” clusters may not be useful anymore. Let's delete the “testing” cluster:
<xterm> $ onecluster delete testing
$ onehost list
  ID NAME       CLUSTER     RVM   TCPU   FCPU   ACPU    TMEM    FMEM STAT
   0 ursa       default       0      0      0    100       0       0   on
   1 ursa01     production    0      0      0    100       0       0   on
   2 ursa02     default       0      0      0    100       0       0   on
   3 ursa03     default       0      0      0    100       0       0   on
   4 ursa04     default       0      0      0    100       0       0   on
</xterm>
As you can see, the hosts assigned to the “testing” cluster have been moved to the “default” one.
The complete reference for the onecluster utility can be found in the Command Line Reference.