Configuring OpenNebula Virtual Networks

Since release 1.2, OpenNebula has been able to automatically assign MAC and IP addresses to newly deployed Virtual Machines. What is needed now is to make the running VMs aware of their network configuration. One way to do this is to use DHCP (described in the Networking with DHCP Howto); another is the new approach that consists of using the last 4 bytes of the MAC address to encode the IP assigned to the interface.
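For example, the MAC address 00:02:c0:a8:0d:01 (a lease you will see later in this howto) encodes its IP like this:

00:02 : c0.a8.0d.01  ->  192.168.13.1   (0xc0=192, 0xa8=168, 0x0d=13, 0x01=1)

The first two bytes are a fixed prefix and the remaining four are the IP in hexadecimal.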

The guest operating system must be aware of this network configuration scheme, so it has to be modified to run a script that extracts this information from the MAC addresses and configures the network accordingly. Here we will show you how to use this approach on a Debian/Ubuntu distribution. Since network configuration files are not the same in all distributions you may need to adapt the script for another Linux flavor, but the basics are here and should not be very different.

There are a couple of assumptions made by the scripts in this howto (that you can easily modify to suit your needs): the netmask is always 255.255.255.0, the default gateway is the .1 address of the network configured on eth0, and node hostnames follow the clusterNN pattern used by the helper functions.

OpenNebula Network Configuration

As an example we are going to create two networks, one public and the other private. To do this we need to create two network configuration files specifying the IPs, or the network range, they will contain.

public.net

NAME   = "public"
TYPE   = FIXED
BRIDGE = eth1
 
LEASES= [IP=147.96.80.185]

In this configuration we are defining a network called public that contains fixed IPs. In this case it only contains one IP (the LEASES line), but it can contain any number of them. We are also specifying that this network will use the bridge eth1; this is not the interface that will be configured inside the VM, but the host bridge its virtual interface will be attached to. In our cluster configuration the eth1 bridge is connected to a switch that is connected to the Internet. You will have to change this to suit your setup.
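You can check that the bridges actually exist on your cluster nodes with brctl. The output below is only a sketch: the names follow the Xen convention, where the bridge takes the name of the physical interface, and the bridge ids are placeholders, so adjust everything to your own setup:

$ brctl show
bridge name     bridge id               STP enabled     interfaces
eth0            8000.xxxxxxxxxxxx       no              peth0
eth1            8000.xxxxxxxxxxxx       no              peth1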

private.net

NAME           = "private"
TYPE           = RANGED
BRIDGE         = eth0
NETWORK_SIZE   = 254
NETWORK_ADDRESS= 192.168.13.0

This other configuration file describes a ranged network, in this case an entire class C. Here we also set the bridge it will use; notice it is different from the public one. We also provide the network address, so all the IPs assigned will be in the range 192.168.13.1 - 192.168.13.254.

Now that we have the files we can let OpenNebula know about these two new networks. The command that deals with Virtual Networks is called onevnet, so to create the two networks we issue onevnet with the create command:

$ onevnet create public.net
$ onevnet create private.net

After executing these commands OpenNebula is aware of the networks. We can check this with the following commands:

$ onevnet list
 NID NAME            TYPE   SIZE BRIDGE
   0 public             1      1 eth1
   1 private            0    256 eth0

$ onevnet show public
NID               : 0
UID               : 0
Network Name      : public
Type              : Fixed
Size              : 1
Bridge            : eth1

....: Template :....
	BRIDGE=eth1
	LEASES=IP=147.96.80.185
	NAME=public
	TYPE=FIXED

....: Leases :....
IP = 147.96.80.185  MAC = 00:02:93:60:50:b9  USED = 0 VID = -1

$ onevnet show private
NID               : 1
UID               : 0
Network Name      : private
Type              : Ranged
Size              : 256
Bridge            : eth0

....: Template :....
	BRIDGE=eth0
	NAME=private
	NETWORK_ADDRESS=192.168.13.0
	NETWORK_SIZE=254
	TYPE=RANGED

....: Leases :....

Guest OS Configuration

To get the IP for each interface and configure networking accordingly, we have developed a small script that should be executed at boot time, before the network goes up. Here is the script:

#!/bin/bash
 
# Gets IP address from a given MAC
mac2ip() {
    mac=$1
 
    let ip_a=0x`echo $mac | cut -d: -f 3`
    let ip_b=0x`echo $mac | cut -d: -f 4`
    let ip_c=0x`echo $mac | cut -d: -f 5`
    let ip_d=0x`echo $mac | cut -d: -f 6`
 
    ip="$ip_a.$ip_b.$ip_c.$ip_d"
 
    echo $ip
}
 
# Gets the network part of an IP
get_network() {
    IP=$1
 
    echo $IP | cut -d'.' -f1,2,3
}
 
# Lists network interfaces as <dev>-<mac> pairs, e.g. eth0-00:02:c0:a8:0d:01
get_interfaces() {
    IFCMD="/sbin/ifconfig -a"
 
    $IFCMD | grep ^eth | sed 's/ *Link encap:Ethernet.*HWaddr /-/g'
}
 
# Extracts the device name from a <dev>-<mac> pair
get_dev() {
    echo $1 | cut -d'-' -f 1
}
 
# Extracts the MAC address from a <dev>-<mac> pair
get_mac() {
    echo $1 | cut -d'-' -f 2
}
 
# Generates an /etc/hosts file for hosts cluster01..cluster99 in the given network
gen_hosts() {
    NETWORK=$1
    echo "127.0.0.1 localhost"
    for n in `seq -w 01 99`; do
        n2=`echo $n | sed 's/^0*//'`
        echo ${NETWORK}.$n2 cluster${n}
    done
}
 
# Generates an /etc/exports entry sharing /images with the given network
gen_exports() {
    NETWORK=$1
    echo "/images ${NETWORK}.0/255.255.255.0(rw,async,no_subtree_check)"
}
 
# Derives the hostname (clusterNN) from the last byte of the IP encoded in the MAC
gen_hostname() {
    MAC=$1
    NUM=`mac2ip $MAC | cut -d'.' -f4`
    NUM2=`echo 000000$NUM | sed 's/.*\(..\)/\1/'`
    echo cluster$NUM2
}
 
# Writes the Debian interfaces(5) stanza for a <dev>-<mac> pair
gen_interface() {
    DEV_MAC=$1
    DEV=`get_dev $DEV_MAC`
    MAC=`get_mac $DEV_MAC`
    IP=`mac2ip $MAC`
    NETWORK=`get_network $IP`
 
    cat <<EOT
auto $DEV
iface $DEV inet static
  address $IP
  network $NETWORK.0
  netmask 255.255.255.0
EOT
 
    if [ $DEV == "eth0" ]; then
      echo "  gateway $NETWORK.1"
    fi
 
echo ""
}
 
 
IFACES=`get_interfaces`
 
# Iterate over the interfaces; the variables keep the values of the last
# one and are used by the (commented-out) generators below
for i in $IFACES; do
    MASTER_DEV_MAC=$i
    DEV=`get_dev $i`
    MAC=`get_mac $i`
    IP=`mac2ip $MAC`
    NETWORK=`get_network $IP`
done
 
# gen_hosts $NETWORK > /etc/hosts
 
# gen_exports $NETWORK  > /etc/exports
 
# gen_hostname $MAC  > /etc/hostname
 
(
cat <<EOT
auto lo
iface lo inet loopback
 
EOT
 
for i in $IFACES; do
    gen_interface $i
done
) > /etc/network/interfaces
 
# /bin/hostname `cat /etc/hostname`

Note: You may have noticed that the calls to the functions that generate /etc/hosts and /etc/exports and set the hostname are commented out. Here we only cover interface configuration, but you can also use those functions to configure more things at boot time.
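For example, uncommenting the gen_hosts line in a VM on the private network of this howto would write an /etc/hosts that starts like this (the script generates entries for cluster01 through cluster99):

127.0.0.1 localhost
192.168.13.1 cluster01
192.168.13.2 cluster02
192.168.13.3 cluster03
...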

This script should reside in /etc/init.d/vmcontext, and you should also create a link to it in /etc/rcS.d so it is run before the network is set up. Be careful selecting the order: it must run just before networking is started (and after local filesystems are mounted, so it can write its files).
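On Debian/Ubuntu something like the following should work; the sequence numbers in /etc/rcS.d vary between releases, so check your own /etc/rcS.d and pick a number that falls after the mount scripts and before networking:

$ chmod +x /etc/init.d/vmcontext
$ ln -s /etc/init.d/vmcontext /etc/rcS.d/S39vmcontext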

Virtual Machine Description

Now that we have OpenNebula and the VM image prepared, the next step is to describe and submit new VM instances. In this example we are going to create a VM that has two interfaces, one public and the other private:

NAME=master
 
MEMORY=512
CPU=1
 
OS=[kernel=/images/vmlinuz,
    initrd=/images/initrd.img,
    root=hda1,
    kernel_cmd="ro xencons=tty console=tty1"]
 
DISK=[source=/images/virtual-cluster/vcluster-master.img,
      clone=yes,
      target=hda,
      readonly=no]
 
NIC = [NETWORK="public"]
NIC = [NETWORK="private",IP="192.168.13.1"]

There are two NIC lines that define two interfaces. Interfaces are defined in order, so the first one will be eth0 and the second eth1 inside the VM (the file the vmcontext script generates from them is sketched after this list):

  • eth0: it is attached to the virtual network named public and does not have an IP explicitly defined, so OpenNebula will select one from the pool of IPs previously defined for that network. In this case there is only one IP defined, so we already know which one it will get. You can also select the IP manually, as done for eth1.
  • eth1: the network is private and the IP is fixed. This is done because the machine will be the head of a cluster and we want it to have a known IP.
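Putting the pieces together, this is the /etc/network/interfaces the vmcontext script would generate on the master VM, assuming the interfaces are enumerated in NIC order (the public MAC 00:02:93:60:50:b9 encodes 147.96.80.185, and remember the script only adds a gateway line for eth0):

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
  address 147.96.80.185
  network 147.96.80.0
  netmask 255.255.255.0
  gateway 147.96.80.1

auto eth1
iface eth1 inet static
  address 192.168.13.1
  network 192.168.13.0
  netmask 255.255.255.0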

In this other example we are going to create a VM definition for a node that will only have one interface in the private network:

NAME=node
 
MEMORY=512
CPU=1
 
OS=[kernel=/images/vmlinuz,
    initrd=/images/initrd.img,
    root=hda1,
    kernel_cmd="ro xencons=tty console=tty1"]
 
DISK=[source=/images/virtual-cluster/vcluster-node.img,
      clone=yes,
      target=hda,
      readonly=no]
 
NIC = [NETWORK="private"]

As you can see there is only one NIC line and the IP will be selected automatically. This way you can bring up multiple VMs with this template and each will have its network automatically configured in the private network, without address collisions.

Now we are going to run a master and two node machines.

$ onevm create master.one
$ onevm create node.one
$ onevm create node.one

Now the machines are known to the OpenNebula server, so they will be scheduled to run on a physical node. Let's see how things look after the machines are deployed.

$ onevm list
  ID     NAME STAT CPU     MEM        HOSTNAME        TIME
   0   master runn   1  524288          ursa05 01 20:50:03
   1     node runn   0  524288          ursa01 01 19:44:59
   2     node runn   0  524288          ursa01 01 19:44:56

$ onevnet show public
NID               : 0
UID               : 0
Network Name      : public
Type              : Fixed
Size              : 1
Bridge            : eth1

....: Template :....
	BRIDGE=eth1
	LEASES=IP=147.96.80.185
	NAME=public
	TYPE=FIXED

....: Leases :....
IP = 147.96.80.185  MAC = 00:02:93:60:50:b9  USED = 1 VID = 0

$ onevnet show private
NID               : 1
UID               : 0
Network Name      : private
Type              : Ranged
Size              : 256
Bridge            : eth0

....: Template :....
	BRIDGE=eth0
	NAME=private
	NETWORK_ADDRESS=192.168.13.0
	NETWORK_SIZE=254
	TYPE=RANGED

....: Leases :....
IP = 192.168.13.1   MAC = 00:02:c0:a8:0d:01  USED = 1 VID = 0
IP = 192.168.13.2   MAC = 00:02:c0:a8:0d:02  USED = 1 VID = 1
IP = 192.168.13.3   MAC = 00:02:c0:a8:0d:03  USED = 1 VID = 2

In the public network the IP is now assigned to VM (VID) 0, that is, the master. The IPs in the private network are as follows:

  • 192.168.13.1: master (VID 0)
  • 192.168.13.2: node (VID 1)
  • 192.168.13.3: node (VID 2)
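
As a quick sanity check, and assuming the images run an SSH server, you can log into the master through its public address and ping a node over the private network:

$ ssh root@147.96.80.185
# ping -c 1 192.168.13.2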