Creating Customized Images

One of the steps when preparing an OpenNebula installation is the creation of Virtual Machine images for base Operating Systems or appliances. Some of these images can be downloaded from the marketplace, but you may need an OS that is not there, or you may need to customize the images in some other way.

I’m going to describe an automated way to customize the base images provided by the Linux distributions using the software libguestfs.

The software libguestfs comes with tools to create and modify Virtual Machine images in a number of formats that qemu understands. Some of these utilities let us add or delete files inside the images, or execute scripts chrooted into the image filesystem.

The first step is getting an image from the distribution web page. I usually get these images as they are very small and don’t have extra software. For this example we will use CentOS 7. Head to http://cloud.centos.org/centos/7/images/ and download the image CentOS-7-x86_64-GenericCloud.qcow2c.

One of the customizations we have to do to this image is to uninstall the cloud-init package that comes installed by default and to install the OpenNebula context package. The easiest way to install extra packages that are not in a repository is to add them to a CDROM image that will be provided to the customization tool. So head to https://github.com/OpenNebula/addon-context-linux/releases and download the latest context package.

To create the CDROM image we can use genisoimage. Remember to add a label so it’s easier to mount. Here we are going to use the label PACKAGES:

  • Copy the packages to a directory, for example packages
  • Execute genisoimage to create the iso that contains those files:
$ genisoimage -o packages.iso -R -J -V PACKAGES packages/

Now we need to prepare a script with the customizations to be done in the image. For example:

#!/bin/bash

# Mount the CDROM that holds the extra packages (label PACKAGES)
mount LABEL=PACKAGES /mnt

# Install opennebula context package
rpm -Uvh /mnt/one-context*rpm

# Remove cloud-init and NetworkManager
yum remove -y NetworkManager cloud-init

# Install growpart and upgrade util-linux, used for filesystem resizing
yum install -y epel-release --nogpgcheck
yum install -y cloud-utils-growpart --nogpgcheck
yum upgrade -y util-linux --nogpgcheck

# Install ruby for onegate tool
yum install -y ruby

Instead of modifying the downloaded image directly we can use a qcow2 feature: creating a new image backed by another one. This way we keep the original image in case we are not happy with the modifications or we want to create another image with different customizations.

$ qemu-img create -f qcow2 -b CentOS-7-x86_64-GenericCloud.qcow2c centos.qcow2
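
You can check that the new image correctly references the original one with qemu-img (a quick optional verification step, not part of the original recipe):

$ qemu-img info centos.qcow2

The output should list CentOS-7-x86_64-GenericCloud.qcow2c as the backing file.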

Now everything is prepared to customize the image. The command we are going to use is virt-customize. It can make a lot of modifications to the image, but we are only going to do two: execute the previous script and disable the root password, just in case. The command is this one:

$ virt-customize -v --attach packages.iso --format qcow2 -a centos.qcow2 --run script.sh --root-password disabled

It attaches two images, the ISO image with the packages and the OS hard disk, executes the script.sh we previously created and disables the root password.

After the command is run, the image centos.qcow2 contains the modifications we made to the original image. Now we can convert it to any other format we need (for example vmdk) or to a full qcow2 image, that is, one that does not depend on any other image. Here are the commands to convert it to qcow2 (compatible with old qemu versions) and vmdk:

$ qemu-img convert -f qcow2 -O qcow2 -o compat=0.10 centos.qcow2 centos-final.qcow2
$ qemu-img convert -f qcow2 -O vmdk centos.qcow2 centos-final.vmdk

There are other customizations you can do, for example setting a fixed password with --root-password password:r00tp4ssw0rd. You can also use virt-sparsify to discard the blocks that are not used by the filesystem. Check the libguestfs web page to learn about all the possibilities.
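
For example, a sparsify run on the converted image could look like this (a sketch; centos-small.qcow2 is just an illustrative output name):

$ virt-sparsify --compress centos-final.qcow2 centos-small.qcow2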

You can also take a look at the presentation I gave about this topic in the CentOS dojo held in Barcelona this year:

https://speakerdeck.com/jfontan/customizing-virtual-machine-images

New Context Packages With Support For EC2 Instances

Alongside the new maintenance release of OpenNebula (4.14.2) we have new context packages with some new features:

  • Contextualize the SSH user with the $USERNAME variable: now you can specify which user will have the SSH key injected into authorized_keys (see the sketch after this list)
  • Contextualization can be used in EC2 instances
  • OneGate token is now retrieved from VMwareTools and the EC2 metadata server, enabling instances running in vCenter and EC2 to interact with OneGate.
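
As a rough sketch of the first item (only USERNAME comes from the list above; the rest of the CONTEXT attributes are illustrative), the VM template could contain something like:

CONTEXT=[
  NETWORK="YES",
  USERNAME="centos",
  SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]" ]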

From now on the context packages are split in two: one version for KVM, vCenter or Xen VMs and another for EC2 instances.

You can download these packages from their GitHub release page.

New Contextualization Packages

A new version of the OpenNebula context packages is out. Don’t be fooled by its version number: even though it is 4.14, it is compatible with previous versions. There is only one feature that needs OpenNebula 4.14 support.

In this release we have a couple of contributions:

The other nice feature is that you can now add start scripts without registering a file. To use this feature there are two new variables:

  • START_SCRIPT: the commands to run when the VM starts
  • START_SCRIPT_BASE64: the same as START_SCRIPT but encoded in base64

When context finds one of these variables it creates a new file with its value, sets the executable bit, changes the directory to the context CD mount point and runs it. As it is executed by the context scripts, all the variables in the context section are available in the environment.

The script does not need to be a shell script: you can add a shebang and run Python, Ruby or even binaries! (not recommended).

Some examples:

START_SCRIPT="yum update; yum install epel-release"
START_SCRIPT="
yum install nginx
systemctl start nginx
systemctl enable nginx
curl --data IP=$NIC[IP, NETWORK=\"public net\"] http://myserver/i_am_alive
"
START_SCRIPT="#!/usr/bin/env ruby

require 'open-uri'

open('http://myserver/configuration') do |s|
  conf = s.read
end
"

The other feature, which requires the next version of OpenNebula, is a new command that makes using OpenNebula Gate easier. But we will talk about this in another post.

Go and download the new version while it’s fresh!

https://github.com/OpenNebula/addon-context-linux/releases/tag/v4.14.0

New OpenNebula Maintenance Release 4.10.2

A new maintenance release for OpenNebula 4.10 is available. There are several fixes and improvements since 4.10.1 and upgrading is recommended. Some of the changes are as follows:

For a more exhaustive list you can check the issue tracker.

You can download the packages from the following URL or use the package repositories to update your OpenNebula installation:

http://opennebula.org/software/

Make sure you backup your configuration and happy upgrading!

OpenNebulaConf2014: Puppet and OpenNebula by Puppet Labs’ David Lutterkort

David Lutterkort, Principal Engineer at Puppet Labs, will give a keynote entitled “Puppet and OpenNebula” in the upcoming OpenNebulaConf 2014 to be held in Berlin on the 2-4 of December.

This talk will show how Puppet can be used by administrators to manage OpenNebula hosts, by users to manage their infrastructure, and how to use Puppet during image builds. Many facets of using an IaaS cloud like OpenNebula can be greatly simplified by using a configuration management tool such as Puppet. This includes the management of hosts as well as the management of cloud resources such as virtual machines and networks. Of course, Puppet can also play an important role in the management of the actual workload of virtual machine instances. Besides using it in the traditional, purely agent-based way, it is also possible to use Puppet during the building of machine images. This serves two purposes: firstly, it speeds up the initial Puppet run when an instance is launched off that image, sometimes quite dramatically. Secondly, it supports operating immutable infrastructure without losing Puppet’s benefits of organizing and simplifying the description of the entire infrastructure.

David is a principal engineer at Puppet Labs and the technical lead for Puppet Labs’ development of Razor. Before joining Puppet Labs, David worked at Red Hat on a variety of management tools and served as the maintainer of Apache Deltacloud. He was one of the earliest contributors to Puppet, and is the main author of the Augeas configuration management tool.

Do not miss this talk, register now, only a few seats are left!

OpenNebula at CloudOpen Europe

Next week the Linux Foundation conferences LinuxCon + CloudOpen + ELC-E Europe 2014 will take place in Dusseldorf. I’ll be there Wednesday 15th at CloudOpen to give a talk about the OpenNebula cloud provisioning model and a two-hour hands-on tutorial on building clouds with OpenNebula. Here are the links to my talks:

If you are there and have questions or want to talk about OpenNebula or any other topic you can meet me before or after the sessions. You can reach me at my twitter account @thevaw or with my email address (jfontan AT this domain) if you want to plan ahead.

See you in Dusseldorf!

OpenNebula 4.8 beta released!

The OpenNebula team is really happy to release the first beta for version 4.8 (4.7.80). In this version, alongside several fixes, we have been working on some new features:

  • Improvements to the Cloud View interface like OneFlow integration
  • New VDC admin view that matches the Cloud View.
  • New virtual network model that makes its configuration and management more flexible with address ranges.
  • IP reservation.
  • Network interface default configuration
  • Quotas can now specify a value of 0 to disable certain objects for users or groups.
  • Logs now have the zone ID so it’s easier to parse them in a centralized syslog configuration.
  • New datastore to use local block devices.
  • Inter datastore image clone.
  • Support for RBD format 2 in CEPH drivers
  • IO throttling for disk devices.
  • New hybrid drivers for Microsoft Azure and IBM Softlayer services.
  • OneGate can now be used to get information about all the VMs in a service.
  • OneFlow can wait until a VM phones home before starting the rest of VMs.
  • Network configuration in a flow can be specified per role.
  • User input on template instantiation for certain VM parameters.
  • Default view for a group in Sunstone.
  • Instantiate VMs on hold.
  • Boot order can be selected from Sunstone.

You can find more information about the new features in the release notes.

In this new release we also start supporting RHEL/CentOS 7. We encourage everyone that is using or planning to use these distributions to try the new packages and file any bugs found in them.

We have also created new repositories for this release so it’s easier to install and your 4.6 installations don’t upgrade to it automatically.

You can download the packages from the software page or use the new repositories. Now is the time to try it and file bugs so we can fix them before the final release.

This new release’s code name is “Lemon Slice”. From Wikipedia:


The Lemon slice nebula, also known as IC 3568, is a planetary nebula that is 1.3 kiloparsecs (4500 ly) away from Earth in the constellation of Camelopardalis (just 7.5 degrees from Polaris). It is a relatively young nebula and has a core diameter of only about 0.4 light years. The Lemon slice nebula is one of the most simple nebulae known, with an almost perfectly spherical morphology. It appears very similar to a lemon, for which it is named. The core of the nebula does not have a distinctly visible structure in formation and is mostly composed of ionized helium. The central star is a very hot and bright asymptotic red giant, and can be seen as a red-orange hue in an amateur’s telescope. A faint halo of interstellar dust surrounds the nebula.

Thank you all for the input, patches and bug reports that made this release possible.

OpenNebula Carina 4.6.2 Released

We have just released a new maintenance release 4.6.2. This time it does not come with new features, just bug fixes.

One of them is a security vulnerability in Sunstone and you should upgrade your installation. Thanks to Dennis Felsch and Mario Heiderich from Horst Görtz Institute for IT-Security, Ruhr-University Bochum for telling us about it.

Other fixes are as follows:

In case you have 4.6.1 installed the upgrade is straightforward as the config files have not changed.

As always, make sure you read the upgrade guide before applying the new release.

Automatic configuration of VMs with Puppet

OpenNebula contextualization consists of a system that writes VM configuration parameters into a CDROM image, and a package installed in the VMs that configures the system using this data. By default it comes with scripts to set the network configuration (IP, DNS), hostname, allowed SSH keys, etc. You can even easily create your own version of the packages with new scripts that configure other parts of the system, as described in the documentation. Still, if you don’t want to create your own context packages you can specify scripts to be started at boot time. In this post we will provide an example of how to use this system to prepare the machine to be configured with Puppet, but these tips are useful for any other configuration management system.
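
To get an idea of what this data looks like, you can mount the context CDROM inside a running VM and inspect context.sh (a rough sketch; the device name and the variables shown are only illustrative):

$ mount -o ro /dev/cdrom /mnt
$ cat /mnt/context.sh
ETH0_IP='10.0.0.10'
SSH_PUBLIC_KEY='ssh-rsa AAAA...'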

The prerequisites for this example are:

  • An already installed Puppet master in a network reachable by your VMs
  • CentOS 6.x base image with context package >= 4.4 and internet connection

To make the VM be configured as soon as the Puppet agent is started you can change /etc/puppet/puppet.conf on the Puppet master machine and set autosign = true in the [main] section (remember to restart the daemon). This way you won’t need to sign the certificates manually:

[main]
autosign = true

In case you are not using autosign you should use the puppet cert command to sign new host certificates and wait until the Puppet agent in those nodes wakes up again. By default they do it every 30 minutes.
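
For reference, the manual workflow on the Puppet master would be something like this (puppet cert commands from the Puppet 3.x tooling of that time; www-15 is just an example node name):

# puppet cert list
# puppet cert sign www-15

The first command lists pending certificate requests and the second one signs the certificate of a specific node.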

The installation and configuration of Puppet agent in the nodes can be done with the aforementioned init scripts. We can add this script to the files datastore. I’ve called it puppet_centos:

#!/bin/bash

PUPPET_MASTER_NAME=puppet.opennebula.org
PUPPET_MASTER_IP=10.0.0.2

if [ -z "$NODE_NAME" ]; then
    NODE_NAME=$(hostname)
fi

# Add node to /etc/hosts
echo "$ETH0_IP    $NODE_NAME" >> /etc/hosts

# Add puppet server to /etc/hosts
echo "$PUPPET_MASTER_IP    $PUPPET_MASTER_NAME" >> /etc/hosts

# Install puppetlabs repo (for latest packages)
rpm -ivh https://yum.puppetlabs.com/el/6/products/x86_64/puppetlabs-release-6-7.noarch.rpm

# Install puppet agent package
yum install -y puppet

cat << EOF > /etc/puppet/puppet.conf
[main]
vardir = /var/lib/puppet
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = \$vardir/ssl

[agent]
pluginsync      = true
report          = true
ignoreschedules = true
daemon          = false
ca_server       = $PUPPET_MASTER_NAME
certname        = $NODE_NAME
environment     = production
server          = $PUPPET_MASTER_NAME
EOF

# Start and enable the puppet agent
puppet resource service puppet ensure=running enable=true

Make sure you change the Puppet master IP and name.

Now in the template for the new VM you will have to add some bits in the context section:

  • add the puppet_centos script in the files (FILES_DS) section
  • set the “init scripts” value to puppet_centos
  • add a new variable called NODE_NAME set to $NAME-$VMID. This way the node name for the VM will be the same as the OpenNebula VM name.

If you are using the command line the context section will be something similar to this:

CONTEXT=[
  FILES_DS="$FILE[IMAGE=puppet_centos]",
  INIT_SCRIPTS="puppet_centos",
  NETWORK="YES",
  NODE_NAME="$NAME-$VMID",
  SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]" ]

Now we have most of the bits needed for the automatic configuration of the VMs after boot. The only thing left is to add the configuration for the nodes.

Since we are working with Virtual Machines we won’t know beforehand the names/IPs of the new VMs that we could refer to when selecting the role of each one. To overcome this limitation, and taking advantage of OpenNebula name generation, we can define the node names in the Puppet master with regular expressions that tell us the role of these VMs. For example, in /etc/puppet/manifests/site.pp we can define this node:

node /^www-\d+/ {
    include apache
}

Now when instantiating the template we can provide the name www. OpenNebula will add the VM ID to the certname so we will have www-15, www-16 and www-17, for example. All these node names will match the regular expression and install apache.


In case you are using the command line you can use this command, replacing centos_template with the name or ID of your template and 3 with the number of VMs you want to instantiate:

$ onetemplate instantiate centos_template -m 3 --name www

Native GlusterFS Image Access for KVM Drivers

GlusterFS is a distributed filesystem with replica and storage distribution features that come in really handy for virtualization. This storage can be mounted as a filesystem using NFS or the FUSE adapter for GlusterFS and used like any other shared filesystem. This way of using it is very convenient as it works the same way as other filesystems, but it has the overhead of NFS or FUSE.

The good news is that for some time now qemu and libvirt have had native support for GlusterFS. This makes it possible for VMs running from images stored in Gluster to talk directly to its servers, making the I/O much faster.

The integration was made to be as similar as possible to the shared drivers (in fact it uses the shared tm and fs datastore drivers). Datastore management operations like image registration or cloning still use the FUSE-mounted filesystem, so OpenNebula administrators will feel at home with it.

[Figure: GlusterFS integration architecture]

This feature is headed for 4.6 and is already in the git repository and the documentation. Basically the configuration to use this integration is as follows.

  • Configure the server to allow non-root user access to Gluster. Add this line to ‘/etc/glusterfs/glusterd.vol’:

    option rpc-auth-allow-insecure on

    And execute this command:

    # gluster volume set <volume> server.allow-insecure on

  • Set the ownership of the files to ‘oneadmin’:

    # gluster volume set <volume> storage.owner-uid <oneadmin uid>
    # gluster volume set <volume> storage.owner-gid <oneadmin gid>

  • Mount GlusterFS using FUSE at some point in your frontend:

    # mkdir -p /gluster
    # chown oneadmin:oneadmin /gluster
    # mount -t glusterfs <server>:/<volume> /gluster

  • Create shared datastores for images and system and add these extra parameters to the images datastore (a full datastore template is sketched after this list):

    DISK_TYPE = GLUSTER
    GLUSTER_HOST = <gluster_server>:24007
    GLUSTER_VOLUME = <volume>
    CLONE_TARGET="SYSTEM"
    LN_TARGET="NONE"

  • Link the system and images datastore directories to the GlusterFS mount point:

    $ ln -s /gluster /var/lib/one/datastores/100
    $ ln -s /gluster /var/lib/one/datastores/101
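
As a sketch of the “create shared datastores” step mentioned above (the datastore name is illustrative; DS_MAD and TM_MAD are the shared-driver values, the rest are the extra parameters from the list):

$ cat gluster-images.ds
NAME           = "gluster_images"
DS_MAD         = fs
TM_MAD         = shared
DISK_TYPE      = GLUSTER
GLUSTER_HOST   = <gluster_server>:24007
GLUSTER_VOLUME = <volume>
CLONE_TARGET   = "SYSTEM"
LN_TARGET      = "NONE"

$ onedatastore create gluster-images.ds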

Now when you start a new VM you can check that the deployment file points to the server configured in the datastore. Another nice feature is that the storage will fall back to a secondary server in case one of them crashes. The information about replicas is gathered automatically; there is no need to add more than one host (and doing so is currently not supported in libvirt).
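
For reference, the disk definition in the deployment file should look roughly like the libvirt network disk below (a sketch; the exact attributes depend on your image and datastore configuration):

<disk type='network' device='disk'>
  <source protocol='gluster' name='<volume>/<path_to_image>'>
    <host name='<gluster_server>' port='24007'/>
  </source>
  <target dev='vda' bus='virtio'/>
  <driver name='qemu' type='qcow2' cache='none'/>
</disk>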

If you are interested in this feature, now is a good time to download and compile the master branch to test it. There is still some time until a release candidate of 4.6 comes out, but we’d love to have feedback as soon as possible so we can fix any problems it may have.

We want to thank the people at Netways for helping us with this integration and with the testing of the qemu/gluster interface, and John Mark from the Gluster team for his technical assistance.