Information Manager Drivers 4.2
The IM drivers are responsible for collecting and reporting information about the hosts. They also gather information about the VMs running on each host, to optimize VM monitoring.
This guide explains how to create probes for IM drivers. It is also a starting point for creating a new IM driver from scratch, and describes the meaning of the monitoring values.
To add a new driver to OpenNebula, add some lines to /etc/one/oned.conf to describe it:
IM_MAD = [ name = "<im name>", executable = "one_im_ssh", arguments = "-r 0 -t 15 <driver name>" ]
Usually the im name and the driver name are the same. The im name is the name used in the CLI to refer to the driver, for example, to set this driver for a host. The driver name refers to the probe directory in /var/lib/one/remotes/im. For example, to use the remotes directory kvm.d the driver name should be kvm.
The executable one_im_ssh is the driver server that executes the probes in the remote hosts. To execute the probes in the frontend instead, substitute it with one_im_sh.
The IM server takes some parameters; these are the most common:
| Parameter | Description |
|---|---|
| -r number | Number of retries |
| -t number | Number of simultaneous monitoring actions |
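For instance, a driver entry that runs the probes on the frontend, retrying failed monitor actions twice and running up to 10 monitoring actions at once, could look like this (the name custom is only an illustrative choice, not a stock driver):

```
IM_MAD = [
    name       = "custom",
    executable = "one_im_sh",
    arguments  = "-r 2 -t 10 custom" ]
```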
An IM driver is composed of one or several scripts that write information to stdout in this form:
KEY1="value1"
KEY2="value2"
These scripts should be stored in a directory called <name of the driver>.d located in /var/lib/one/remotes/im. The scripts should be executable and can be written in any language, but make sure the hosts (or the frontend) have everything they need to run them. For example, if a script is written in Python, the interpreter and all the required libraries must be installed on the hosts.
The drivers receive three parameters in case they need them:
| Position | Description |
|---|---|
| 1 | hypervisor: the name of the hypervisor, that is, the name of the directory without .d |
| 2 | host id: identifier of the host in OpenNebula |
| 3 | host name: name of the host in OpenNebula |
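Conceptually, the driver server simply runs every executable probe in the driver directory with these three arguments and concatenates their output. A minimal sketch of that loop (run_probes is a hypothetical helper written for illustration, not actual OpenNebula code):

```shell
#!/bin/bash
# Run every executable file in a probe directory, passing the hypervisor
# name, host id and host name, and concatenate whatever the probes print.
# Illustration of the mechanism only, not the real driver server.
run_probes() {
    local dir=$1 hypervisor=$2 host_id=$3 host_name=$4
    local probe
    for probe in "$dir"/*; do
        [ -x "$probe" ] && "$probe" "$hypervisor" "$host_id" "$host_name"
    done
}
```

All the KEY="value" lines from the probes end up merged into a single monitoring message for the host.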
Take into account that in shell scripts the parameters start at 1 ($1), while in Ruby they start at 0 (ARGV[0]). For shell scripts you can use this snippet to get the parameters:
hypervisor=$1
host_id=$2
host_name=$3
You can add any key and value you want to use later in RANK and REQUIREMENTS expressions for scheduling, but there are some basic values you should output:
| Key | Description |
|---|---|
| HYPERVISOR | Name of the hypervisor of the host, useful for selecting hosts with a specific technology. |
| TOTALCPU | Number of CPUs multiplied by 100. For example, a 16-core machine will have a value of 1600. |
| CPUSPEED | Speed of the CPUs, in MHz. |
| TOTALMEMORY | Maximum memory that could be used for VMs. It is advised to subtract the memory used by the hypervisor. |
| USEDMEMORY | Memory used, in kilobytes. |
| FREEMEMORY | Memory available for VMs at that moment, in kilobytes. |
| FREECPU | Percentage of idle CPU multiplied by the number of cores. For example, if 50% of the CPU is idle in a 4-core machine, the value will be 200. |
| USEDCPU | Percentage of used CPU multiplied by the number of cores. |
| NETRX | Bytes received from the network. |
| NETTX | Bytes transmitted to the network. |
For example, a probe that gets memory information about a host could be something like:
#!/bin/bash
total=$(free | awk '/^Mem/ { print $2 }')
used=$(free | awk '/buffers\/cache/ { print $3 }')
free=$(free | awk '/buffers\/cache/ { print $4 }')
echo "TOTALMEMORY=$total"
echo "USEDMEMORY=$used"
echo "FREEMEMORY=$free"
Executing it should give us memory values:
<xterm>
$ ./memory_probe
TOTALMEMORY=1020696
USEDMEMORY=209932
FREEMEMORY=810724
</xterm>
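A probe reporting some of the CPU keys from the table above could look like the following. This is only a sketch assuming a Linux host where /proc/cpuinfo is available; the probes shipped with OpenNebula are more thorough:

```shell
#!/bin/bash
# Sketch of a CPU probe for a Linux host (assumes /proc/cpuinfo exists).
# TOTALCPU is the number of cores multiplied by 100, as OpenNebula expects.
cores=$(grep -c '^processor' /proc/cpuinfo)
echo "TOTALCPU=$((cores * 100))"
# CPUSPEED: the first core's reported frequency, truncated to whole MHz.
# Note: some virtualized environments do not expose a "cpu MHz" line.
speed=$(awk -F': *' '/^cpu MHz/ { printf "%d", $2; exit }' /proc/cpuinfo)
echo "CPUSPEED=$speed"
```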
For real examples check the directories at /var/lib/one/remotes/im.
The scripts should also provide information about the VMs running in the host. This is useful because only one call is needed to gather all the information about the VMs on each host. The output should be in this form:
VM_POLL=YES
VM=[
  ID=86,
  DEPLOY_ID=one-86,
  POLL="USEDMEMORY=918723 USEDCPU=23 NETTX=19283 NETRX=914 STATE=a" ]
VM=[
  ID=645,
  DEPLOY_ID=one-645,
  POLL="USEDMEMORY=563865 USEDCPU=74 NETTX=2039847 NETRX=2349923 STATE=a" ]
The first line (VM_POLL=YES) indicates to OpenNebula that VM information will follow. Then the information about each VM is output in that form.
| Key | Description |
|---|---|
| ID | OpenNebula VM id. It can be -1 if the VM was not created by OpenNebula |
| DEPLOY_ID | Hypervisor name or identifier of the VM |
| POLL | VM monitoring info, in the same format as the VMM driver poll action |
For example, here is a simple script that gets the status of qemu/kvm VMs from libvirt. As before, check the scripts shipped with OpenNebula for a complete example:
#!/bin/bash
echo "VM_POLL=YES"
virsh -c qemu:///system list | grep one- | while read vm; do
    deploy_id=$(echo $vm | cut -d' ' -f 2)
    id=$(echo $deploy_id | cut -d- -f 2)
    status_str=$(echo $vm | cut -d' ' -f 3)
    if [ "$status_str" == "running" ]; then
        state="a"
    else
        state="e"
    fi
    echo "VM=["
    echo "  ID=$id,"
    echo "  DEPLOY_ID=$deploy_id,"
    echo "  POLL=\"STATE=$state\" ]"
done
<xterm>
$ ./vm_poll
VM_POLL=YES
VM=[
  ID=0,
  DEPLOY_ID=one-0,
  POLL="STATE=a" ]
VM=[
  ID=1,
  DEPLOY_ID=one-1,
  POLL="STATE=a" ]
</xterm>