Contextualizing Virtual Machines 3.6
There are two contextualization mechanisms available in OpenNebula: automatic IP assignment, and a more generic mechanism to pass arbitrary files and configuration parameters to the VM. You can use either of them individually, or both.
You can use ready-made packages that install the context scripts and prepare the udev configuration in your appliances. This is described in the Contextualization Packages for VM Images section.
With OpenNebula you can derive the IP address assigned to the VM from its MAC address using the MAC_PREFIX:IP rule. To achieve this we provide context scripts for Debian, Ubuntu, CentOS and openSUSE based systems. These scripts can be easily adapted for other distributions; check dev.opennebula.org.
To configure the Virtual Machine, follow these steps:

Copy the script

$ONE_SRC_CODE_PATH/share/scripts/vmcontext.sh

into the /etc/init.d directory in the VM root file system, and link it so it runs at boot:<xterm> $ ln /etc/init.d/vmcontext.sh /etc/rc2.d/S01vmcontext.sh </xterm>
Having done so, whenever the VM boots it will execute this script, which scans the available network interfaces, extracts their MAC addresses, performs the MAC-to-IP conversion, and constructs an /etc/network/interfaces file that ensures the correct IP assignment to the corresponding interface.
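The conversion performed by the script can be sketched in shell. This is only an illustration, not the script itself: the MAC value below is hypothetical, and it assumes the default 2-octet MAC prefix followed by the four IP octets encoded in hex.

```shell
# Sketch of the MAC-to-IP conversion done by vmcontext.sh: the last
# four octets of the MAC are the IP address in hexadecimal.
mac="02:00:c0:a8:00:05"            # hypothetical MAC set by OpenNebula

# Drop the 2-octet prefix, keep the 4 IP octets
set -- $(echo "$mac" | cut -d: -f3-6 | tr ':' ' ')
ip=$(printf "%d.%d.%d.%d" "0x$1" "0x$2" "0x$3" "0x$4")

echo "$ip"                         # -> 192.168.0.5
```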
The method we provide to give configuration parameters to a newly started virtual machine is an ISO image (following the OVF recommendation). This method is network agnostic, so it can also be used to configure network interfaces. In the VM description file you can specify the contents of the ISO image (files and directories), the device on which the ISO image will be made accessible, and the configuration parameters that will be written to a file for later use inside the virtual machine.
In this example we see a Virtual Machine with two associated disks. The Disk Image holds the filesystem where the Operating System will run from. The ISO image holds the contextualization for that VM:

- context.sh : file that contains configuration variables, filled by OpenNebula with the parameters specified in the VM description file
- init.sh : script called by the VM at start that will configure specific services for this VM instance
- certificates : directory that contains certificates for some service
- service.conf : service configuration

context.sh is included by default. You have to specify the values that will be written inside context.sh and the files that will be included in the image.
The FILES attribute within CONTEXT is only allowed to OpenNebula users within the oneadmin group.
In the VM description file you can tell OpenNebula to create a contextualization image and fill it with values using the CONTEXT parameter. For example:
CONTEXT = [
  hostname   = "MAINHOST",
  ip_private = "$NIC[IP, NETWORK=\"public net\"]",
  dns        = "$NETWORK[DNS, NETWORK_ID=0]",
  root_pass  = "$IMAGE[ROOT_PASS, IMAGE_ID=3]",
  ip_gen     = "10.0.0.$VMID",
  files      = "/service/init.sh /service/certificates.$UID /service/service.conf"
]
Variables inside the CONTEXT section will be added to the context.sh file inside the contextualization image. These variables can be specified in several different ways:
- Hardcoded variables, like: hostname = "MAINHOST"
- $<template_variable> : any single-value variable of the VM template, like: ip_gen = "10.0.0.$VMID"
- $<template_variable>[<attribute>] : any single value contained in a multiple-value variable in the VM template, like: ip_private = $NIC[IP]
- $<template_variable>[<attribute>, <attribute2>=<value2>] : any single value contained in a multiple-value variable in the VM template, setting one attribute to discern between multiple variables with the same name, like: ip_public = "$NIC[IP, NETWORK=\"Public\"]". You can use any of the attributes defined in the variable, NIC in the previous example.
- $NETWORK[<vnet_attribute>, <NETWORK_ID|NETWORK>=<vnet_id|vnet_name>] : any single-value variable in the Virtual Network template, like: dns = "$NETWORK[DNS, NETWORK_ID=3]". Note that the network MUST be in use by one of the NICs defined in the template. The vnet_attribute can be TEMPLATE to include the whole vnet template in XML (base64 encoded).
- $IMAGE[<image_attribute>, <IMAGE_ID|IMAGE>=<img_id|img_name>] : any single-value variable in the Image template, like: root = "$IMAGE[ROOT_PASS, IMAGE_ID=0]". Note that the image MUST be in use by one of the DISKs defined in the template. The image_attribute can be TEMPLATE to include the whole image template in XML (base64 encoded).
- $USER[<user_attribute>] : any single-value variable in the user (owner of the VM) template, like: ssh_key = "$USER[SSH_KEY]". The user_attribute can be TEMPLATE to include the whole user template in XML (base64 encoded).
- $UID : the uid of the VM owner
- $TEMPLATE : the whole template in XML format, encoded in base64

The file generated will be something like this:
# Context variables generated by OpenNebula
hostname="MAINHOST"
ip_private="192.168.0.5"
dns="192.168.4.9"
ip_gen="10.0.0.85"
files="/service/init.sh /service/certificates.5 /service/service.conf"
target="sdb"
root="13.0"
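Attributes pulled in as TEMPLATE (and the $TEMPLATE variable itself) arrive base64 encoded; inside the guest they can be recovered with base64 -d. A minimal sketch, using a hypothetical encoded value:

```shell
# Hypothetical base64-encoded template, as OpenNebula would deliver it
template_b64="PFRFTVBMQVRFLz4="

# Decode it back to XML
template_xml=$(echo "$template_b64" | base64 -d)
echo "$template_xml"               # -> <TEMPLATE/>
```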
Some of the variables have special meanings, but none of them are mandatory:
Attribute | Description |
---|---|
files | Files and directories that will be included in the contextualization image |
target | Device where the contextualization image will be made available to the VM instance. Please note that the proper device mapping may depend on the guest OS, e.g. Ubuntu VMs should use hd* as the target device |
The VM should be prepared to use the contextualization image. First of all, it needs to mount the contextualization image somewhere at boot time. A script that executes after boot is also useful to make use of the information provided.
The file context.sh is compatible with bash syntax, so you can easily source it inside a shell script to get the variables that it contains.
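For instance, a boot script can source the file and use its variables directly. The sketch below writes a stand-in context.sh under /tmp purely for illustration; in a real VM the file would come from the mounted context image.

```shell
# Write a stand-in context.sh such as OpenNebula would generate
mkdir -p /tmp/context
cat > /tmp/context/context.sh <<'EOF'
hostname="MAINHOST"
ip_private="192.168.0.5"
EOF

# Source it and use the variables, as an init script would
. /tmp/context/context.sh
echo "configuring $hostname with IP $ip_private"
```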
Contextualization packages are available for several distributions, so you can prepare them to work with OpenNebula without much effort. These are the changes they make to your VM:
There are two packages available:
After the installation of these packages, the images will configure the network on start using the MAC address generated by OpenNebula. They will also try to mount the CD-ROM context image from /dev/cdrom, and if init.sh is found it will be executed.
The purpose of this section is to demonstrate how to quickly deploy a VM with OpenNebula in a few easy steps. We will assume that you have properly configured OpenNebula and that you have at least one worker node running KVM (this guide does not work with Xen for the moment).
We have prepared and contextualized a VM which is available for download here. The VM runs ttylinux (http://ttylinux.net/).
For this example we are going to use the simplest possible network configuration. Create a new file based on the following template and change the LEASES entries to available IPs from your network.
You should also change the BRIDGE entry if your hypervisor is configured to use a different bridge.
$ cat small_network.net
NAME = "small_network"
TYPE = FIXED
BRIDGE = br0
LEASES = [ IP="192.168.0.5" ]
LEASES = [ IP="192.168.0.6" ]
LEASES = [ IP="192.168.0.7" ]
Once the file is prepared we can create the network:
<xterm> $ onevnet create small_network.net </xterm>
Create a new file based on the following template:
$ cat marketplace_image.one
NAME = "ttylinux"
PATH = "http://marketplace.c12g.com/appliance/4fc76a938fb81d3517000003/download"
TYPE = OS
Once the file is prepared we can create the image:
<xterm> $ oneimage create marketplace_image.one --datastore default </xterm>
You can also use the Marketplace tab in Sunstone to import the image.
Create a new file based on the following template:
$ cat ttylinux.one
NAME = ttylinux
CPU = 0.1
MEMORY = 64
DISK = [ IMAGE = "ttylinux" ]
NIC = [ NETWORK = "small_network" ]
FEATURES = [ acpi="no" ]
We are ready to deploy the VM. To do so simply do: <xterm> $ onevm create ttylinux.one </xterm>
It will take a minute or so to copy the image to /var/lib/one and to boot up the system. In the meantime we can figure out what IP the VM will have so that we can ssh into it.
<xterm> $ onevm show ttylinux|grep IP
IP=192.168.1.6,
</xterm>
By now, the VM should be up and running: <xterm> $ onevm list
  ID USER     NAME     STAT CPU   MEM HOSTNAME  TIME
   3 oneadmin myttyser runn   0 65536 localhost 00 00:06:49
</xterm>
Note: If the STAT attribute is not runn you should read the logs to see why it did not boot. You can find these logs in /var/log/one/<id>.log (vm specific log) and /var/log/one/oned.log.
We can ssh into the VM. The user is root and the password is password: <xterm> $ ssh root@192.168.1.6 Warning: Permanently added '192.168.1.6' (RSA) to the list of known hosts. root@192.168.1.6's password:
Chop wood, carry water.
# </xterm>
You might have been wondering how the VM got automatically configured with an IP from the pool of IPs defined by the OpenNebula Network associated to the VM template. Basically, we developed a script that runs during the boot procedure and configures the IP address based on the MAC address of the VM. This is more thoroughly explained here.
We have not yet used the CONTEXT feature of OpenNebula which not only provides a simple way to configure the IP of the VM, but which also allows us to configure users, public keys, the host name, and any other thing we might think of. You can read a more detailed explanation on how to contextualize here.
Create a new file with the following content in /var/tmp:

$ cat /var/tmp/init.sh
#!/bin/bash
if [ -f /mnt/context/context.sh ]
then
  . /mnt/context/context.sh
fi

if [ -n "$HOSTNAME" ]; then
  echo $HOSTNAME > /etc/HOSTNAME
  hostname $HOSTNAME
fi

if [ -n "$IP_PUBLIC" ]; then
  ifconfig eth0 $IP_PUBLIC
fi

if [ -n "$NETMASK" ]; then
  ifconfig eth0 netmask $NETMASK
fi

if [ -f /mnt/context/$ROOT_PUBKEY ]; then
  cat /mnt/context/$ROOT_PUBKEY >> /root/.ssh/authorized_keys
fi

if [ -n "$USERNAME" ]; then
  adduser -s /bin/bash -D $USERNAME
  if [ -f /mnt/context/$USER_PUBKEY ]; then
    mkdir -p /home/$USERNAME/.ssh/
    cat /mnt/context/$USER_PUBKEY >> /home/$USERNAME/.ssh/authorized_keys
    chown -R $USERNAME /home/$USERNAME/.ssh
    chmod 700 /home/$USERNAME/.ssh
    chmod 600 /home/$USERNAME/.ssh/authorized_keys
  fi
fi
Copy your public key to a tmp directory (e.g. /var/tmp/, $HOME/public) and add a context section to the ttylinux.one template.
$ cat ttylinux.one
NAME = ttylinux
CPU = 0.1
MEMORY = 64
DISK = [ IMAGE = "ttylinux" ]
NIC = [ NETWORK = "small_network" ]
FEATURES = [ acpi="no" ]
CONTEXT = [
  hostname    = "$NAME",
  ip_public   = "PUBLIC_IP",
  files       = "/var/tmp/init.sh /var/tmp/id_dsa.pub",
  target      = "hdc",
  root_pubkey = "id_dsa.pub",
  username    = "opennebula",
  user_pubkey = "id_dsa.pub"
]
Now we can ssh into the VM without entering a password, since id_dsa.pub has been copied to the authorized_keys of both root and the username account you have defined in the template.
<xterm> $ ssh opennebula@192.168.0.7
Chop wood, carry water.
$ </xterm>