Automatic configuration of VMs with Puppet

OpenNebula contextualization is a system that writes VM configuration parameters into a CDROM image, together with a package installed in the VMs that is able to configure the system using this data. By default it comes with scripts to set the network configuration (IP, DNS), hostname, allowed SSH keys, etc. You can even easily create your own version of the packages with new scripts that configure other parts of the system, as described in the documentation. Still, if you don’t want to create your own context packages you can specify scripts to be started at boot time. In this post we will provide an example of how to use this system to prepare a machine to be configured with Puppet, but these tips are useful for any other configuration management system.

The requisites for this example are:

  • An already installed Puppet master in a network reachable by your VMs
  • A CentOS 6.x base image with contextualization packages >= 4.4 and an Internet connection

To make the VM be configured as soon as the Puppet agent is started, you can change /etc/puppet/puppet.conf in the Puppet master machine and set autosign = true in the [main] section (remember to restart the daemon). This way you won’t need to sign the certificates manually:

[main]
autosign = true

In case you are not using autosign you will have to use the puppet cert command to sign new host certificates and wait until the Puppet agent in those nodes wakes up again; by default they do so every 30 minutes.
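For example, on the Puppet master (the certname www-15 is just an illustration; use whatever name the agent presents):

# puppet cert list             # show pending certificate requests
# puppet cert sign www-15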

The installation and configuration of the Puppet agent in the nodes can be done with the aforementioned init scripts. We can add this script to the files datastore; I’ve called it puppet_centos:

#!/bin/bash

PUPPET_MASTER_NAME=puppet.opennebula.org
PUPPET_MASTER_IP=10.0.0.2

if [ -z "$NODE_NAME" ]; then
    NODE_NAME=$(hostname)
fi

# Add node to /etc/hosts
echo "$ETH0_IP    $NODE_NAME" >> /etc/hosts

# Add puppet server to /etc/hosts
echo "$PUPPET_MASTER_IP    $PUPPET_MASTER_NAME" >> /etc/hosts

# Install puppetlabs repo (for latest packages)
rpm -ivh https://yum.puppetlabs.com/el/6/products/x86_64/puppetlabs-release-6-7.noarch.rpm

# Install puppet agent package
yum install -y puppet

cat << EOF > /etc/puppet/puppet.conf
[main]
vardir = /var/lib/puppet
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = \$vardir/ssl

[agent]
pluginsync      = true
report          = true
ignoreschedules = true
daemon          = false
ca_server       = $PUPPET_MASTER_NAME
certname        = $NODE_NAME
environment     = production
server          = $PUPPET_MASTER_NAME
EOF

# Enable puppet agent
puppet resource service puppet ensure=running enable=true

Make sure you change the Puppet master IP and name to match your setup.
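To register the script in the files datastore you can use oneimage. A rough example, assuming the default files datastore is called “files” (adjust the path and datastore name to your setup):

$ oneimage create --name puppet_centos --type CONTEXT \
--path ./puppet_centos -d files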

Now in the template for the new VM you will have to add some bits in the context section:

  • add the puppet_centos script to the files (FILES_DS) section
  • set the “init scripts” value to puppet_centos
  • add a new variable called NODE_NAME set to $NAME-$VMID. This way the node name for the VM will be the same as the OpenNebula VM name.

If you are using the command line the context section will be something similar to this:

CONTEXT=[
  FILES_DS="$FILE[IMAGE=puppet_centos]",
  INIT_SCRIPTS="puppet_centos",
  NETWORK="YES",
  NODE_NAME="$NAME-$VMID",
  SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]" ]

Now we have most of the bits needed for the automatic configuration of the VMs after boot. The only thing left is to add the configuration for the nodes.

Since we are working with Virtual Machines we won’t know beforehand the names or IPs of the new VMs that we could refer to when selecting the role of each one. To overcome this limitation, and taking advantage of OpenNebula name generation, we can define the node names in the Puppet master with regular expressions, so we can tell the role of these VMs. For example, in /etc/puppet/manifests/site.pp we can define this node:

node /^www-\d+/ {
    include apache
}

Now when instantiating the template we can provide the name www. OpenNebula will add the VM ID to the certname, so we will have www-15, www-16 and www-17, for example. All these node names match the regular expression, so Apache will be installed on them.


In case you are using the command line you can use this command, replacing centos_template with the name or ID of your template and 3 with the number of VMs you want to instantiate:

$ onetemplate instantiate centos_template -m 3 --name www

Native GlusterFS Image Access for KVM Drivers

GlusterFS is a distributed filesystem with replication and storage distribution features that come in really handy for virtualization. This storage can be mounted as a filesystem using NFS or the FUSE adapter for GlusterFS, and then used as any other shared filesystem. This way of using it is very convenient as it works the same way as other filesystems, but it has the overhead of NFS or FUSE.

The good news is that for some time now qemu and libvirt have had native support for GlusterFS. This makes it possible for VMs running from images stored in Gluster to talk directly to its servers, making the I/O much faster.

The integration was made to be as similar as possible to the shared drivers (in fact it uses the shared tm and fs datastore drivers). Datastore management operations like image registration or cloning still use the FUSE-mounted filesystem, so OpenNebula administrators will feel at home with it.


This feature is headed for 4.6 and is already in the git repository and the documentation. Basically, the configuration needed to use this integration is as follows.

  • Configure the server to allow non-root user access to Gluster. Add this line to ‘/etc/glusterfs/glusterd.vol’:

    option rpc-auth-allow-insecure on

    And execute this command:

    # gluster volume set <volume> server.allow-insecure on

  • Set the ownership of the files to ‘oneadmin’:

    # gluster volume set <volume> storage.owner-uid=<oneadmin uid>
    # gluster volume set <volume> storage.owner-gid=<oneadmin gid>

  • Mount GlusterFS using FUSE at some mount point in your frontend:

    # mkdir -p /gluster
    # chown oneadmin:oneadmin /gluster
    # mount -t glusterfs <server>:/<volume> /gluster

  • Create shared datastores for images and system, and add these extra parameters to the images datastore (a complete datastore template sketch follows this list):

    DISK_TYPE = GLUSTER
    GLUSTER_HOST = <gluster_server>:24007
    GLUSTER_VOLUME = <volume>
    CLONE_TARGET="SYSTEM"
    LN_TARGET="NONE"

  • Link the system and images datastore directories to the GlusterFS mount point:

    $ ln -s /gluster /var/lib/one/datastores/100
    $ ln -s /gluster /var/lib/one/datastores/101
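Putting the pieces from the list above together, a sketch of the images datastore template could look like this (the datastore name, volume and server are only examples):

$ cat gluster-images.ds
NAME           = "gluster_images"
DS_MAD         = fs
TM_MAD         = shared
DISK_TYPE      = GLUSTER
GLUSTER_HOST   = gluster_server:24007
GLUSTER_VOLUME = one_vol
CLONE_TARGET   = "SYSTEM"
LN_TARGET      = "NONE"

$ onedatastore create gluster-images.ds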

Now when you start a new VM you can check that the deployment file points to the server configured in the datastore. Another nice feature is that the storage will fall back to a secondary server in case one of them crashes. The information about replicas is gathered automatically; there is no need to add more than one host (doing so is currently not supported by libvirt).
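As a rough illustration (the volume name and image path are only examples, and the exact attributes depend on the image and driver configuration), the disk section of the deployment file looks something like this:

<disk type='network' device='disk'>
  <source protocol='gluster' name='one_vol/<image path>'>
    <host name='gluster_server' port='24007'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>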

If you are interested in this feature, now is a good time to download and compile the master branch to test it. There is still some time until a release candidate of 4.6 comes out, but we’d love to have feedback as soon as possible so we can fix any problems it may have.

We want to thank the Netways people for helping us with this integration and with the testing of the qemu/gluster interface, and John Mark from the Gluster team for his technical assistance.

First Add-ons in OpenNebula

The new OpenNebula Add-ons initiative has received an enthusiastic response from the community, ranging from individual developers and research centers to corporations. In only two days we already have three Add-ons ready to download:

Two Add-ons are now being created:

and several others, like LXC support, under discussion.

You can contribute code to any of these add-ons, make a new add-on, or join the discussions in the development mailing list.

We are looking forward to your participation!

OpenNebulaConf Hacking Session

On the first day of the conference we are going to have a couple of activities that I’m sure you’ll be interested in. There is a tutorial for people that want to learn how to deploy and use OpenNebula and, in parallel, we will have a free-form hacking session.

This hacking session is meant for people that already have OpenNebula deployed and know how to use it. There you can catch up with OpenNebula developers and have conversations that are a bit hard to have on the mailing list. It is also a great place to meet other people that may be doing things similar to you, or that have already sorted out some of the problems you may have. Here are some ideas on what you can do in the hacking session:

  • Ask about some new feature that is coming in new releases
  • Get help modifying the Sunstone interface for your company
  • Integrate your billing system with OpenNebula accounting
  • Create a new Transfer Manager driver that knows how to talk to your SAN
  • Migrate a driver you’ve made for an old OpenNebula version to the newest one
  • Optimize your OpenNebula deployment

But you can also help us with the project! For example:

  • Discuss some feature you would like to see included
  • Help improve or develop a new feature for OpenNebula
  • Give advice or add new documentation
  • Bug hunting!

This session will be held on the first day (September 24) from 2pm to 6pm, but we will be available during the whole conference. In case there’s no time on the first day or you want to talk to us any other day, just come and say hi!

See you in Berlin!

New OpenNebula Package Repositories

Until now the way to install the latest version of OpenNebula was to download the packages from our web page and install them manually, or to compile the sources. We have created package repositories for CentOS, Ubuntu, Debian and openSUSE to make installation and upgrades even easier. The latest contextualization packages for CentOS/RedHat, Ubuntu and Debian are also located in those repositories.

The instructions to add the repositories and install the frontend are as follows; execute them as root.

CentOS 6.4

# cat << EOT > /etc/yum.repos.d/opennebula.repo
[opennebula]
name=opennebula
baseurl=http://opennebula.org/repo/CentOS/6/stable/\$basearch
enabled=1
gpgcheck=0
EOT
# yum install opennebula-server opennebula-sunstone opennebula-ozones opennebula-gate opennebula-flow

Ubuntu and Debian

Change DISTRIBUTION to Ubuntu/12.04, Ubuntu/13.04 or Debian/7

# wget http://opennebula.org/repo/Debian/repo.key
# apt-key add repo.key
# echo "deb http://opennebula.org/repo/DISTRIBUTION stable opennebula" > /etc/apt/sources.list.d/opennebula.list
# apt-get update
# apt-get install opennebula opennebula-flow opennebula-gate opennebula-tools opennebula-sunstone

openSUSE

# zypper ar -f -n packman http://packman.inode.at/suse/openSUSE_12.3 packman
# zypper addrepo --no-gpgcheck --refresh -t YUM http://opennebula.org/repo/openSUSE/12.3/stable/x86_64 opennebula
# zypper refresh
# zypper install opennebula opennebula-zones opennebula-sunstone

OpenNebula and Foreman integration

Our team is in the process of rearchitecting our test and development infrastructure and we needed a way to easily install new OSes. These installations will be done on both physical nodes and Virtual Machines. To do this we selected The Foreman as the installation server.

For physical nodes we use the standard foreman workflow: we add a new host, select its OS and install it. For virtual machines we wanted something a bit more flexible, controlled from OpenNebula itself. The idea is to configure the different operating systems in foreman and let our developers select the OS to be installed in the machine.

To do this we have a hook that communicates with foreman and registers new hosts when a VM with certain parameters is created. The parameters that we can add in the template are these:

  • FOREMAN_OS_ID: Operating System identifier in foreman
  • FOREMAN_SUBNET: Network where the VM is going to start

The subnet is provided because we have two networks in our infrastructure. The hook will only run when FOREMAN_OS_ID is found in the template.

This is the hook we have added to OpenNebula. Bear in mind that this is a work in progress and we want to make it more straightforward for the user, for example selecting the OS by its name and not by a number.

[code language="ruby"]
#!/usr/bin/env ruby

# Add OpenNebula ruby library path. Alternatively you can install OpenNebula
# ruby gem
$: << '/usr/lib/one/ruby'

require 'rubygems'
require 'foreman_api'
require 'opennebula'
require 'base64'
require 'rexml/document'
require 'nokogiri' # needed below to parse the Base64-decoded VM template

# Parameters received by the hook from OpenNebula
ID=ARGV[0]
TEMPLATE_ENCODED=ARGV[1]

# Log file for script debugging
LOG=File.open('/tmp/hook.log', 'w+')

# Change your credentials and endpoint here
CREDENTIALS = {
    :base_url => 'http://foreman',
    :username => 'admin',
    :password => 'amazingly_strong_password'
}

# In our infrastructure we have two networks; these are their IDs in foreman
SUBNETS = {
    'building' => 1,
    'internal' => 2
}

# There are some values hardcoded for the VMs as we don't use many different
# parameters but these can also be changed
def create_foreman_host(params = {})
    host = ForemanApi::Resources::Host.new(CREDENTIALS)

    description = {
        "host" => {
            :name               => params[:name],
            :mac                => params[:mac],
            :ip                 => params[:ip],
            :architecture_id    => 1, # x86_64
            :environment_id     => 1, # production
            :domain_id          => 1, # local
            :subnet_id          => params[:subnet_id],
            :operatingsystem_id => params[:os_id].to_i,
            :puppet_proxy_id    => 1, # Only one proxy
            :hostgroup_id       => 1, # We only have one hostgroup
            :build              => 1, # Enable VM building
            :ptable_id          => params[:ptable_id],
            :medium_id          => params[:medium_id]
        }
    }

    host.create(description)
end

def get_foreman_os(id)
    os  = ForemanApi::Resources::OperatingSystem.new(CREDENTIALS)
    res = os.index

    res[0].select {|o| o["operatingsystem"]["id"] == id }[0]["operatingsystem"]
end

@client=OpenNebula::Client.new

template_decoded=Base64.decode64(TEMPLATE_ENCODED)
xml=Nokogiri::XML(template_decoded)

vm=OpenNebula::VirtualMachine.new(xml, @client)

LOG.puts vm.inspect

os_id=vm['VM/USER_TEMPLATE/FOREMAN_OS_ID']
subnet_name=vm['VM/USER_TEMPLATE/FOREMAN_SUBNET']

# We only execute the hook when FOREMAN_OS_ID is set in the VM template
exit(0) if !os_id

os=get_foreman_os(os_id.to_i)

# We need to fill medium and ptable values from OS parameters as Foreman uses
# the values from the hostgroup
medium=os['media'][0]['medium']['id']
ptable=os['ptables'][0]['ptable']['id']

subnet=1

subnet=SUBNETS[subnet_name] if SUBNETS[subnet_name]

# Fill VM parameters
info = {
    :name      => vm['VM/NAME'],
    :ip        => vm['VM/TEMPLATE/NIC/IP'],
    :mac       => vm['VM/TEMPLATE/NIC/MAC'],
    :subnet_id => subnet,
    :os_id     => os_id,
    :medium_id => medium,
    :ptable_id => ptable
}

LOG.puts create_foreman_host(info).inspect

# Chill out a bit and let foreman do its job
sleep 5

vm = OpenNebula::VirtualMachine.new(
    OpenNebula::VirtualMachine.build_xml(ID), @client)

# Release the VM hold so it can start
LOG.puts vm.release.inspect
[/code]

This hook requires the foreman_api gem. Now we add the hook to the OpenNebula configuration (oned.conf) with this stanza:

[code language="bash"]
VM_HOOK = [
    name      = "foreman-create",
    on        = "CREATE",
    command   = "/var/lib/one/foreman_create_hook.rb",
    arguments = "$ID $TEMPLATE" ]
[/code]
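If the foreman_api gem is not already present on the frontend, it can be installed with RubyGems before enabling the hook:

# gem install foreman_api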

To create the new VMs we have prepared an empty qcow2 image that will be used as their disk. Making them qcow2 lets us clone them very fast and keeps them much smaller. We also have a template for all the VMs, something like this:

[code language="bash"]
OS=[
  ARCH="x86_64",
  BOOT="network" ]

CPU="1"
MEMORY="768"

DISK=[
  IMAGE="empty_10gb_disk" ]

NIC=[
  NETWORK="building" ]

GRAPHICS=[
  LISTEN="0.0.0.0",
  TYPE="vnc" ]

FOREMAN_OS_ID="2" # in our case this is an Ubuntu 12.10
FOREMAN_SUBNET="building"
[/code]

The VM should be launched on hold so we have time to add the host to foreman and configure the DHCP and TFTP servers. At this time we can only do this using the CLI:

$ onetemplate instantiate foreman-base --hold

We can also change the OS to be installed without changing the template:

$ onetemplate instantiate foreman-base --hold --raw FOREMAN_OS_ID=1

After the VM is created the hook kicks in, adds the new host to foreman and releases the VM from hold so it can start and be installed. When the installation procedure is finished we can start using the VM, or capture it to use as a base for other VMs. To do this we can take a disk snapshot (not a hot one) and shut down the machine to save the new image.

Things to take into account:

  • Add the installation of the OpenNebula contextualization packages to the Foreman templates so the images are ready to be used in OpenNebula
  • Configure puppet, chef or another CMS so the images can serve as a basis for your app deployments

Features we want to add to the integration:

  • Select OS by name, not id
  • Select the subnet from the OpenNebula network so it does not need to be specified
  • Automatically hold the VM on startup so Sunstone can be used to install new VMs
  • New hook to delete the host from foreman after it is deleted in OpenNebula

You can find the code from this post in this gist.

Start Your New OpenNebula User Group!

The OpenNebula Project is happy to announce the support for the creation and operation of OpenNebula User Groups. An OpenNebula User Group is a gathering of our users in a local area to share best practices, discuss technical questions, network, and learn from each other.

If you are a passionate OpenNebula user and are interested in starting your own OpenNebula User Group, join our Community Discuss mailing list and let us know about your plans.

There is more information in the new User Groups section of our site.

We look forward to your User Group proposal!

Command Line Tweaks for OpenNebula 4.0

In the last post we saw the beautiful new face of Sunstone. Even though we are putting a lot of effort into the web interface, we are also giving some love to the command line interface.

Until now the creation of images and templates from the command line consisted of creating a template file and feeding it to the oneimage/onetemplate create command. That possibility still exists, but now we can also create simple images or VM templates using just command parameters.

For example, registering an image can be done with this command:

$ oneimage create -d default --name ttylinux \
--path http://marketplace.c12g.com/appliance/4fc76a938fb81d3517000003/download
ID: 4

You can also pass a local file as the path, but take into account that you need to configure the datastore SAFE_DIRS parameter to make it work.
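For example (the directory and file name are just illustrations), you would add SAFE_DIRS to the datastore template and then register the local file:

$ onedatastore update default        # add a line like: SAFE_DIRS="/var/tmp"
$ oneimage create -d default --name local_image --path /var/tmp/local_image.qcow2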

We can also create an image from scratch, for example a raw image of 512 MB that will be connected using virtio:

$ oneimage create --name scratch --prefix vd --type datablock --fstype raw \
--size 512m -d default
ID: 5

You can get more information on the parameters by issuing oneimage create --help.

Creation of VM templates is also very similar. For example, creating a VM that uses both disks and a network, adding contextualization options and enabling VNC:

$ onetemplate create --name my_vm --cpu 4 --vcpu 4 --memory 16g \
--disk ttylinux,scratch --network network --net_context --vnc
ID: 1
$ onetemplate instantiate my_vm
VM ID: 10

The output of onevm show was also changed to show disks and nics in an easier to read fashion:

$ onevm show 10
VIRTUAL MACHINE 10 INFORMATION
ID : 10
NAME : my_vm-10

[...]

VM DISKS
 ID TARGET IMAGE                               TYPE SAVE SAVE_AS
  0    hda ttylinux                            file   NO       -
  1    vda scratch                             file   NO       -

VM NICS
 ID NETWORK                                IP               MAC VLAN BRIDGE
  0 network                       192.168.0.8 02:00:c0:a8:00:08   no vbr0

VIRTUAL MACHINE TEMPLATE
CONTEXT=[
  DISK_ID="2",
  ETH0_DNS="192.168.0.1",
  ETH0_GATEWAY="192.168.0.1",
  ETH0_IP="192.168.0.8",
  ETH0_MASK="255.255.255.0",
  TARGET="hdb" ]
CPU="4"
GRAPHICS=[
  LISTEN="0.0.0.0",
  PORT="5910",
  TYPE="vnc" ]
MEMORY="16384"
TEMPLATE_ID="1"
VCPU="4"
VMID="10"

This way you can get useful information about the VM at a glance. If you need more information you can still use the -x option or the new --all option, which prints all the information in the template as previous versions did.

oneimage show was also changed so you can check which VMs are using an image:

$ oneimage show scratch
IMAGE 5 INFORMATION
ID : 5
NAME : scratch

[...]

VIRTUAL MACHINES

ID USER     GROUP    NAME            STAT UCPU    UMEM HOST             TIME
10 oneadmin oneadmin my_vm-10        pend    0      0K              0d 00h03

This is also true for onevnet show:

$ onevnet show network
VIRTUAL NETWORK 0 INFORMATION
ID : 0
NAME : network

[...]

VIRTUAL MACHINES

ID USER     GROUP    NAME            STAT UCPU    UMEM HOST             TIME
 9 oneadmin oneadmin template1       pend    0      0K              0d 00h30
10 oneadmin oneadmin my_vm-10        pend    0      0K              0d 00h04

Another nice parameter is --dry. It can be used with onetemplate and oneimage create. It prints the generated template but does not register it. This is useful when you want to create a complex template but don’t want to type it from scratch: just redirect it to a file and edit it to add the features not available from the command line.
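For example (hypothetical names), you can dump a starting point to a file, edit it and register it afterwards:

$ onetemplate create --name big_vm --cpu 2 --memory 4g \
--disk ttylinux,scratch --network network --vnc --dry > big_vm.tmpl
$ vi big_vm.tmpl                     # add the bits not covered by the CLI
$ onetemplate create big_vm.tmpl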

One last thing: the parameters for onevm create are exactly the same as for onetemplate create. If you just want to create a fire-and-forget VM you can use onevm create the same way.
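For instance, a quick test VM could be created directly with the same kind of parameters used in the template example above (names are illustrative):

$ onevm create --name quick_test --cpu 1 --memory 1g \
--disk ttylinux --network network --vnc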

OpenNebula 4.0 will be available for testing really soon. Until then, we will keep you updated with the new features in posts like this. You can also check the posts released in the last weeks about the Ceph integration, the new scheduling feature, and the new Sunstone.
Stay tuned!

New Contextualization Packages for OpenNebula 3.8

Some weeks ago, with the creation of the OpenNebula Marketplace, we released contextualization packages to help prepare VM images. These packages did some work that previously had to be done manually:

  • Disable/delete udev net and cdrom persistent rules. On boot, Linux distributions scan for new hardware, and the discovered network and cdrom devices are added to a file. This process is really useful for physical machines, since adding or removing a network card won’t change the names of the rest, so the existing configuration remains valid. With virtual machines this is a nuisance: a simple MAC address change will make udev create a new device for that interface and the configuration will no longer be used.
  • Unconfigure network. This way the VM won’t configure the network before the OpenNebula contextualization kicks in.
  • Add contextualization scripts to startup. These scripts will configure the network and will call init.sh from the context cdrom enabling us to do some magic with the context section of the VM template.

One of the changes introduced in OpenNebula 3.8 is the new contextualization packages. The new version does the same as the previous one, with some changes that we hope will make the people creating images happier.

Modular Contextualization Scripts

Now the script launched on VM boot has less logic:

  • Mounts the context cdrom
  • Exports the variables from context.sh
  • Executes any script located in /etc/one-context.d
  • Executes init.sh from cdrom
  • Unmounts the cdrom

Network configuration is now done by a script located in /etc/one-context.d/00-network. Any file located in that directory will be executed on start, in alphabetical order. This way we can add any script to configure or start processes on boot. For example, we can have a script that populates the authorized_keys file using a variable from context.sh. Remember that those variables are exported to the environment and are easily accessible by the scripts:

#!/bin/bash
mkdir -p /root/.ssh
echo "$SSH_PUBLIC_KEY" > /root/.ssh/authorized_keys


Network Configuration Driven by Contextualization

The new network configuration scripts can still infer the network configuration from the MAC address of the VM, the same as in previous versions. By default OpenNebula generates MAC addresses by setting the first 2 bytes of the MAC to the prefix configured in oned.conf and the remaining 4 bytes to the IP assigned. This method is convenient but lacks flexibility and misses some interesting parameters like the network mask or the gateway.
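For example, with the default 02:00 prefix a VM that gets the IP 192.168.0.5 will have the MAC 02:00:c0:a8:00:05. A minimal bash sketch of the reverse mapping the network script performs:

#!/bin/bash
# Derive the IPv4 address encoded in the last 4 bytes of the MAC (prefix 02:00)
mac="02:00:c0:a8:00:05"
IFS=: read -r _ _ o1 o2 o3 o4 <<< "$mac"
echo "$((16#$o1)).$((16#$o2)).$((16#$o3)).$((16#$o4))"   # -> 192.168.0.5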

The other way we had to configure the network was adding a script to the contextualization cdrom using the FILES parameter. This method is very flexible, but most of the time we configure the same network parameters, so this script changes very rarely. Also, in new OpenNebula versions we discourage the use of the contextualization FILES parameter as it can lead to security problems.

Now the network configuration script will search for some predefined environment variables to configure network parameters. The parameters are:

Attribute        Description
<DEV>_IP         IP assigned to the interface
<DEV>_NETWORK    Interface network
<DEV>_MASK       Interface network mask
<DEV>_GATEWAY    Interface gateway


We substitute <DEV> with the name of the interface the variable refers to, in uppercase, as in ETH0, ETH1, etc. As an example, we can have a network defined this way:

NAME=public
NETWORK_ADDRESS=80.0.0.0
NETWORK_MASK=255.255.255.0
GATEWAY=80.0.0.1


And then in the VM contextualization those parameters for eth0 can be expressed as:

CONTEXT=[
 ETH0_IP = "$NIC[IP, NETWORK=\"public\"]",
 ETH0_NETWORK = "$NIC[NETWORK_ADDRESS, NETWORK=\"public\"]",
 ETH0_MASK = "$NIC[NETWORK_MASK, NETWORK=\"public\"]",
 ETH0_GATEWAY = "$NIC[GATEWAY, NETWORK=\"public\"]"
]


Generation of Custom Contextualization Packages

The OpenNebula source code comes with the scripts and the files needed to generate those packages. This way you can also generate custom packages, tweaking the scripts that will go inside your images or adding new scripts that will perform other duties.

The files are located in share/scripts/context-packages:

  • base: files that will be in all the packages. Right now it contains empty udev rules and the init script that will be executed on startup.
  • base_<type>: files specific for linux distributions. It contains the contextualization scripts for the network and comes in rpm and deb flavors. You can add here your own contextualization scripts and they will be added to the package when you run the generation script.
  • generate.sh: The script that generates the packages.
  • postinstall: This script will be executed after the package installation and will clean the network and udev configuration. It will also add the init script to the services started on boot.

To generate the packages you will need:

  • Ruby >= 1.8.7
  • gem fpm
  • dpkg utils for deb package creation
  • rpm utils for rpm package creation
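The fpm gem from the list above can be installed with RubyGems, for example:

$ gem install fpm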

You can also pass some parameters to the generation script using environment variables. For example, to generate an rpm package you would execute:

$ PACKAGE_TYPE=rpm ./generate.sh 


These are the default values of the parameters, but you can change any of them the same way we did for PACKAGE_TYPE:

VERSION=3.7.80
MAINTAINER=C12G Labs <support@c12g.com>
LICENSE=Apache
PACKAGE_NAME=one-context
VENDOR=C12G Labs
DESCRIPTION="
This package prepares a VM image for OpenNebula:
 * Disables udev net and cd persistent rules
 * Deletes udev net and cd persistent rules
 * Unconfigures the network
 * Adds OpenNebula contextualization scripts to startup

To get support use the OpenNebula mailing list:
 http://opennebula.org/community:mailinglists"
PACKAGE_TYPE=deb
URL=http://opennebula.org


For more information check the README.md file from that directory.

Contextualization Packages for VMs

We know that creating new Virtual Appliances can sometimes be cumbersome. To help you create them, a new set of packages was developed so that preparing these images to work with OpenNebula is a breeze. They are compatible with:

  • Ubuntu >= 11.x
  • Debian Squeeze
  • CentOS 6.x
  • RHEL 6.x

These packages will prepare the udev rules so you won’t have problems after the first start, and will also add the contextualization scripts to configure the network and any other subsystem or software using the contextualization CDROM.

More information in the Contextualization Packages for VM Images guide.