OpenNebula 4.8 beta released!

The OpenNebula team is really happy to release the first beta for version 4.8 (4.7.80). In this version, alongside several fixes, we have been working on some new features:

  • Improvements to the Cloud View interface, like OneFlow integration.
  • New VDC admin view that matches the Cloud View.
  • New virtual network model that makes configuration and management more flexible with address ranges.
  • IP reservation.
  • Network interface default configuration.
  • Quotas can now specify a value of 0 to disable certain objects for users or groups.
  • Logs now include the zone ID, so it's easier to parse them in a centralized syslog configuration.
  • New datastore to use local block devices.
  • Inter-datastore image clone.
  • Support for RBD format 2 in Ceph drivers.
  • I/O throttling for disk devices.
  • New hybrid drivers for Microsoft Azure and IBM SoftLayer services.
  • OneGate can now be used to get information about all the VMs in a service.
  • OneFlow can wait until a VM phones home before starting the rest of the VMs.
  • Network configuration in a flow can be specified per role.
  • User input on template instantiation for certain VM parameters.
  • Default view for a group in Sunstone.
  • Instantiate VMs on hold.
  • Boot order can be selected from Sunstone.

You can find more information about the new features in the release notes.

In this new release we also start supporting RHEL/CentOS 7. We encourage everyone that is using or planning to use these distributions to try the new packages and file any bugs found in them.

We have also created new repositories for this release so it's easier to install and your 4.6 installations don't automatically upgrade to it.

You can download the packages from the software page or use the new repositories. Now is the time to try it and file bugs so we can fix them before the final release.

The code name for this new release is “Lemon Slice”. From Wikipedia:


The Lemon slice nebula, also known as IC 3568, is a planetary nebula that is 1.3 kiloparsecs (4500 ly) away from Earth in the constellation of Camelopardalis (just 7.5 degrees from Polaris). It is a relatively young nebula and has a core diameter of only about 0.4 light years. The Lemon slice nebula is one of the most simple nebulae known, with an almost perfectly spherical morphology. It appears very similar to a lemon, for which it is named. The core of the nebula does not have a distinctly visible structure in formation and is mostly composed of ionized helium. The central star is a very hot and bright asymptotic red giant, and can be seen as a red-orange hue in an amateur’s telescope. A faint halo of interstellar dust surrounds the nebula.

Thank you all for the input, patches and bug reports that made this release possible.

OpenNebula Carina 4.6.2 Released

We have just released maintenance version 4.6.2. This time it does not come with new features, just bug fixes.

One of them fixes a security vulnerability in Sunstone, so you should upgrade your installation. Thanks to Dennis Felsch and Mario Heiderich from the Horst Görtz Institute for IT-Security, Ruhr-University Bochum, for reporting it.

A number of other bugs are also fixed in this release.

In case you have 4.6.1 installed the upgrade is straightforward as the
config files have not changed.

As always, make sure you read the upgrade guide before applying the new release.

Automatic configuration of VMs with Puppet

OpenNebula contextualization is a system that writes VM configuration parameters into a CDROM image, plus a package installed in the VMs that configures the system using this data. By default it comes with scripts to set the network configuration (IP, DNS), hostname, allowed SSH keys, etc. You can even create your own version of the packages with new scripts that configure other parts of the system, as described in the documentation. Still, if you don't want to create your own context packages, you can specify scripts to be started at boot time. In this post we provide an example of how to use this system to prepare the machine to be configured with Puppet, but these tips are useful for any other CMS.

The requisites for this example are:

  • An already installed Puppet master in a network reachable by your VMs
  • A CentOS 6.x base image with context packages >= 4.4 and an Internet connection

To make the VM be configured as soon as the Puppet agent starts, you can change /etc/puppet/puppet.conf on the Puppet master machine and set autosign = true in the main section (remember to restart the daemon). This way you won't need to sign the certificates manually:

[main]
autosign = true

In case you are not using autosign you should use the puppet cert command to sign new host certificates and wait until the Puppet agent in those nodes wakes up again. By default they do it every 30 minutes.
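
For example, a minimal sketch of that manual step, run as root on the Puppet master (the node name is illustrative):

puppet cert list
puppet cert sign www-15.local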

The installation and configuration of the Puppet agent in the nodes can be done with the aforementioned init scripts. We can add this script to the files datastore; I've called it puppet_centos:

#!/bin/bash

PUPPET_MASTER_NAME=puppet.opennebula.org
PUPPET_MASTER_IP=10.0.0.2

if [ -z "$NODE_NAME" ]; then
    NODE_NAME=$(hostname)
fi

# Add node to /etc/hosts
echo "$ETH0_IP    $NODE_NAME" >> /etc/hosts

# Add puppet server to /etc/hosts
echo "$PUPPET_MASTER_IP    $PUPPET_MASTER_NAME" >> /etc/hosts

# Install puppetlabs repo (for latest packages)
rpm -ivh https://yum.puppetlabs.com/el/6/products/x86_64/puppetlabs-release-6-7.noarch.rpm

# Install puppet agent package
yum install -y puppet

cat << EOF > /etc/puppet/puppet.conf
[main]
vardir = /var/lib/puppet
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = \$vardir/ssl

[agent]
pluginsync      = true
report          = true
ignoreschedules = true
daemon          = false
ca_server       = $PUPPET_MASTER_NAME
certname        = $NODE_NAME
environment     = production
server          = $PUPPET_MASTER_NAME
EOF

# Enable puppet agent
puppet resource service puppet ensure=running enable=true

Make sure you change the Puppet master IP and name to match your setup.
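
To actually register the script in the files datastore you can use the oneimage command; a minimal sketch, assuming the script is saved locally as puppet_centos and the datastore is called files:

$ oneimage create -d files --name puppet_centos --type CONTEXT --path ./puppet_centos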

Now in the template for the new VM you will have to add some bits in the context section:

  • add the puppet_centos script in the files (FILES_DS) section
  • set the “init scripts” value to puppet_centos
  • add a new variable called NODE_NAME set to $NAME-$VMID. This way the node name for the VM will be the same as the OpenNebula VM name.

If you are using the command line, the context section will look something like this:

CONTEXT=[
  FILES_DS="$FILE[IMAGE=puppet_centos]",
  INIT_SCRIPTS="puppet_centos",
  NETWORK="YES",
  NODE_NAME="$NAME-$VMID",
  SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]" ]

Now we have most of the pieces needed to automatically configure the VMs after boot. The only thing left is to add the node configuration in Puppet.

Since we are working with virtual machines, we won't know beforehand the names/IPs of the new VMs that we could use to select the role of each one. To overcome this limitation, and taking advantage of OpenNebula name generation, we can define the node names in the Puppet master with regular expressions, so the node name tells us the role of each VM. For example, in /etc/puppet/manifests/site.pp we can define this node:

node /^www-\d+/ {
    include apache
}

Now when instantiating the template we can provide the name www. OpenNebula will add the VM ID to the certname so we will have www-15, www-16 and www-17, for example. All these node names will match the regular expression and install apache.


If you are using the command line you can use this command, replacing centos_template with the name or ID of your template and 3 with the number of VMs you want to instantiate:

$ onetemplate instantiate centos_template -m 3 --name www

Native GlusterFS Image Access for KVM Drivers

GlusterFS is a distributed filesystem with replication and storage distribution features that come in really handy for virtualization. This storage can be mounted as a filesystem using NFS or the GlusterFS FUSE adapter and used like any other shared filesystem. This way of using it is very convenient, as it works the same way as other filesystems, but it has the overhead of NFS or FUSE.

The good news is that for some time now qemu and libvirt have had native support for GlusterFS. This makes it possible for VMs running from images stored in Gluster to talk directly to its servers, making the I/O much faster.

The integration was made to be as similar as possible to the shared drivers (in fact it uses the shared tm and fs datastore drivers). Datastore management operations like image registration or cloning still use the FUSE-mounted filesystem, so OpenNebula administrators will feel at home with it.


This feature is headed for 4.6 and is already in the git repository and the documentation. The basic configuration to use this integration is as follows.

  • Configure the server to allow non root user access to Gluster. Add this line to ‘/etc/glusterfs/glusterd.vol’:

    option rpc-auth-allow-insecure on

    And execute this command:

    # gluster volume set <volume> server.allow-insecure on

  • Set the ownership of the files to ‘oneadmin’:

    # gluster volume set <volume> storage.owner-uid=<oneadmin uid>
    # gluster volume set <volume> storage.owner-gid=<oneadmin gid>

  • Mount GlusterFS using FUSE at some point in your frontend:

    # mkdir -p /gluster
    # chown oneadmin:oneadmin /gluster
    # mount -t glusterfs <server>:/<volume> /gluster

  • Create the shared images and system datastores, and add these extra parameters to the images datastore (a full datastore template is sketched after this list):

    DISK_TYPE = GLUSTER
    GLUSTER_HOST = <gluster_server>:24007
    GLUSTER_VOLUME = <volume>
    CLONE_TARGET="SYSTEM"
    LN_TARGET="NONE"

  • Link the system and images datastore directories to the GlusterFS mount point:

    $ ln -s /gluster /var/lib/one/datastores/100
    $ ln -s /gluster /var/lib/one/datastores/101
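
As referenced above, this is a rough sketch of the two datastore definitions (the names and Gluster server are illustrative, and the datastore IDs are assumed to be 100 and 101 to match the symlinks above):

$ cat images_ds.conf
NAME           = "gluster_images"
DS_MAD         = fs
TM_MAD         = shared
DISK_TYPE      = GLUSTER
GLUSTER_HOST   = <gluster_server>:24007
GLUSTER_VOLUME = <volume>
CLONE_TARGET   = "SYSTEM"
LN_TARGET      = "NONE"
$ onedatastore create images_ds.conf

$ cat system_ds.conf
NAME   = "gluster_system"
TYPE   = SYSTEM_DS
TM_MAD = shared
$ onedatastore create system_ds.conf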

Now when you start a new VM you can check that the deployment file points to the server configured in the datastore. Another nice feature is that storage will fall back to a secondary server in case one of them crashes. The information about replicas is gathered automatically, so there is no need to add more than one host (adding more than one host is not currently supported in libvirt anyway).
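
For reference, the disk section of the generated libvirt deployment file should look roughly like this (a sketch; the volume, image path and host are illustrative):

<disk type='network' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source protocol='gluster' name='<volume>/<image_path>'>
    <host name='<gluster_server>' port='24007'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>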

If you are interested in this feature, it's a good time to download and compile the master branch and test it. There is still some time until a release candidate of 4.6 comes out, but we'd love to have feedback as soon as possible so we can fix any problems it may have.

We want to thank the Netways people for helping us with this integration and the testing of the qemu/gluster interface, and John Mark from the Gluster team for his technical assistance.

First Add-ons in OpenNebula

The new OpenNebula Add-ons initiative has received an enthusiastic response from the community, ranging from individual developers and research centers to corporations. In only two days we already have three add-ons ready to download, two more add-ons now being created, and several others, like LXC support, under discussion.

You can contribute code to any of these add-ons, make a new add-on, or join the discussions in the development mailing list.

We are looking forward to your participation!

OpenNebulaConf Hacking Session

On the first day of the conference we are going to have a couple of activities that I'm sure you'll be interested in. There is a tutorial for people who want to learn how to deploy and use OpenNebula, and in parallel we will have a free-form hacking session.

This hacking session is meant for people who already have OpenNebula deployed and know how to use it. There you can catch up with OpenNebula developers and have conversations that are a bit hard to have on the mailing list. It is also a great place to meet other people who may be doing things similar to yours or have already sorted out some of the problems you may have. Here are some ideas of what you can do in the hacking session:

  • Ask about some new feature that is coming in new releases
  • Get help modifying the Sunstone interface for your company
  • Integrate your billing system with OpenNebula accounting
  • Create a new Transfer Manager driver that knows how to talk to your SAN
  • Migrate a driver you’ve made for an old OpenNebula version to the newest one
  • Optimize your OpenNebula deployment

But you can also help us with the project! For example:

  • Discuss a feature you would like to have included
  • Help improve or develop a new feature for OpenNebula
  • Give advice or add new documentation
  • Bug hunting!

This session will be held on the first day (September 24) from 2pm to 6pm, but we will be available during the whole conference. If there's no time on the first day, or you want to talk to us any other day, just come and say hi!

See you in Berlin!

New OpenNebula Package Repositories

Until now the way to install the latest version of OpenNebula was to download the packages from our web page and install them manually, or to compile the sources. We have now created package repositories for CentOS, Ubuntu, Debian and openSUSE to make installation and upgrades even easier. The latest contextualization packages for CentOS/RedHat, Ubuntu and Debian are also located in those repositories.

The instructions to add the repositories and install the frontend are as follows; execute them as root.

CentOS 6.4

# cat << EOT > /etc/yum.repos.d/opennebula.repo
[opennebula]
name=opennebula
baseurl=http://opennebula.org/repo/CentOS/6/stable/\$basearch
enabled=1
gpgcheck=0
EOT
# yum install opennebula-server opennebula-sunstone opennebula-ozones opennebula-gate opennebula-flow

Ubuntu and Debian

Change DISTRIBUTION to Ubuntu/12.04, Ubuntu/13.04 or Debian/7

# wget http://opennebula.org/repo/Debian/repo.key
# apt-key add repo.key
# echo "deb http://opennebula.org/repo/DISTRIBUTION stable opennebula" > /etc/apt/sources.list.d/opennebula.list
# apt-get update
# apt-get install opennebula opennebula-flow opennebula-gate opennebula-tools opennebula-sunstone

openSUSE

# zypper ar -f -n packman http://packman.inode.at/suse/openSUSE_12.3 packman
# zypper addrepo --no-gpgcheck --refresh -t YUM http://opennebula.org/repo/openSUSE/12.3/stable/x86_64 opennebula
# zypper refresh
# zypper install opennebula opennebula-zones opennebula-sunstone

OpenNebula and Foreman integration

Our team is in the process of rearchitecting our testing and development infrastructure, and we needed a way to easily install new operating systems. This installation will be done on both physical nodes and virtual machines. To do this we selected The Foreman as the installation server.

For physical nodes we use the standard Foreman workflow, where we add a new host, select its OS and install it. For virtual machines we wanted to be a bit more flexible and control it from OpenNebula itself. The idea is to configure the different operating systems in Foreman and let our developers select the OS to be installed in the machine.

To do this we have a hook that communicates with Foreman and registers new hosts when a VM with certain parameters is created. The parameters we can add in the template are:

  • FOREMAN_OS_ID: Operating System identifier in foreman
  • FOREMAN_SUBNET: Network where the VM is going to start

The subnet is provided as we have two networks in our infrastructure. The hook will only run when FOREMAN_OS_ID is found in the template.

This is the hook we have added to OpenNebula. Bear in mind that this is a work in progress and we want to make it more straightforward for the user, for example selecting the OS by its name and not by a number.

[code language="ruby"]
#!/usr/bin/env ruby

# Add OpenNebula ruby library path. Alternatively you can install OpenNebula
# ruby gem
$: << '/usr/lib/one/ruby'

require 'rubygems'
require 'foreman_api'
require 'opennebula'
require 'base64'
require 'nokogiri'

# Parameters received by the hook from OpenNebula
ID=ARGV[0]
TEMPLATE_ENCODED=ARGV[1]

# Log file for script debugging
LOG=File.open('/tmp/hook.log', 'w+')

# Change your credentials and endpoint here
CREDENTIALS = {
  :base_url => 'http://foreman',
  :username => 'admin',
  :password => 'amazingly_strong_password'
}

# In our infrastructure we have two networks; these are their IDs in Foreman
SUBNETS = {
  'building' => 1,
  'internal' => 2
}

# There are some values hardcoded for the VMs as we don't use many different
# parameters but these can also be changed
def create_foreman_host(params = {})
  host = ForemanApi::Resources::Host.new(CREDENTIALS)

  description = {
    "host" => {
      :name => params[:name],
      :mac => params[:mac],
      :ip => params[:ip],
      :architecture_id => 1,   # x86_64
      :environment_id => 1,    # production
      :domain_id => 1,         # local
      :subnet_id => params[:subnet_id],
      :operatingsystem_id => params[:os_id].to_i,
      :puppet_proxy_id => 1,   # Only one proxy
      :hostgroup_id => 1,      # We only have one hostgroup
      :build => 1,             # Enable VM building
      :ptable_id => params[:ptable_id],
      :medium_id => params[:medium_id]
    }
  }

  host.create(description)
end

def get_foreman_os(id)
  os = ForemanApi::Resources::OperatingSystem.new(CREDENTIALS)
  res = os.index

  res[0].select {|o| o["operatingsystem"]["id"] == id }[0]["operatingsystem"]
end

@client=OpenNebula::Client.new

template_decoded=Base64.decode64(TEMPLATE_ENCODED)
xml=Nokogiri::XML(template_decoded)

vm=OpenNebula::VirtualMachine.new(xml, @client)

LOG.puts vm.inspect

os_id=vm['VM/USER_TEMPLATE/FOREMAN_OS_ID']
subnet_name=vm['VM/USER_TEMPLATE/FOREMAN_SUBNET']

# We only execute the hook when FOREMAN_OS_ID is set in the VM template
exit(0) if !os_id

os=get_foreman_os(os_id.to_i)

# We need to fill medium and ptable values from OS parameters as Foreman uses
# the values from the hostgroup
medium=os['media'][0]['medium']['id']
ptable=os['ptables'][0]['ptable']['id']

subnet=1

subnet=SUBNETS[subnet_name] if SUBNETS[subnet_name]

# Fill VM parameters
info = {
  :name => vm['VM/NAME'],
  :ip => vm['VM/TEMPLATE/NIC/IP'],
  :mac => vm['VM/TEMPLATE/NIC/MAC'],
  :subnet_id => subnet,
  :os_id => os_id,
  :medium_id => medium,
  :ptable_id => ptable
}

LOG.puts create_foreman_host(info).inspect

# Chill out a bit and let Foreman do its job
sleep 5

vm = OpenNebula::VirtualMachine.new(
  OpenNebula::VirtualMachine.build_xml(ID), @client)

# Release the VM hold so it can start
LOG.puts vm.release.inspect
[/code]

This hook requires the foreman_api gem. We add the hook to the OpenNebula configuration (oned.conf) with this stanza:

[code language="bash"]
VM_HOOK = [
  name = "foreman-create",
  on = "CREATE",
  command = "/var/lib/one/foreman_create_hook.rb",
  arguments = "$ID $TEMPLATE" ]
[/code]

To create new VMs we have created an empty qcow2 image that will be used as the disk for the new VMs. Making them qcow2 lets us clone them very fast and keeps them much smaller. We also have a template for all the VMs, something like this:

[code language="bash"]
OS=[
  ARCH="x86_64",
  BOOT="network" ]

CPU="1"
MEMORY="768"

DISK=[
  IMAGE="empty_10gb_disk" ]

NIC=[
  NETWORK="building" ]

GRAPHICS=[
  LISTEN="0.0.0.0",
  TYPE="vnc" ]

FOREMAN_OS_ID="2" # in our case this is an Ubuntu 12.10
FOREMAN_SUBNET="building"
[/code]

The VM should be launched on hold so we have time to add the host to Foreman and configure the DHCP and TFTP servers. At the moment we can only do this using the CLI:

$ onetemplate instantiate foreman-base --hold

We can also change the OS to be installed without changing the template:

$ onetemplate instantiate foreman-base --hold --raw FOREMAN_OS_ID=1

After the VM is created the hook kicks in, adds the new host to Foreman and releases the VM from hold so it can start and be installed. When the installation procedure is finished we can start using the VM, or capture it so we can use it as a base for other VMs. To do this we can use a disk snapshot (not a hot one) and shut down the machine to save the new image.
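
A minimal sketch of that capture step with the CLI (the VM ID, disk ID and image name are illustrative, and the exact subcommand may vary between OpenNebula versions):

$ onevm disk-snapshot 42 0 ubuntu-base-image
$ onevm shutdown 42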

Things to take into account:

  • Add the installation of the OpenNebula contextualization packages to the Foreman templates so the images are ready to be used in OpenNebula
  • Configure Puppet, Chef or another CMS so the images can serve as a basis for your app deployments

Features we want to add to the integration:

  • Select the OS by name, not ID
  • Select the subnet from the OpenNebula network so it does not need to be specified
  • Automatically hold the VM on startup so Sunstone can be used to install new VMs
  • New hook to delete the host from Foreman after it is deleted in OpenNebula

You can find the code from this post in this gist.

Start Your New OpenNebula User Group!

The OpenNebula Project is happy to announce support for the creation and operation of OpenNebula User Groups. An OpenNebula User Group is a gathering of our users in a local area to share best practices, discuss technical questions, network, and learn from each other.

If you are a passionate OpenNebula user and are interested in starting your own OpenNebula User Group, join our Community Discuss mailing list and let us know about your plans.

There is more information in the new User Groups section of our site.

We look forward to your User Group proposal!

Command Line Tweaks for OpenNebula 4.0

In the last post we saw the beautiful new face of Sunstone. Even though we are putting a lot of effort into the web interface, we are also giving some love to the command line interface.

Until now, creating images and templates from the command line consisted of writing a template file and feeding it to the oneimage/onetemplate create commands. That possibility still exists, but now we can also create simple images or VM templates using just command parameters.

For example, registering an image can be done with this command:

$ oneimage create -d default --name ttylinux \
--path http://marketplace.c12g.com/appliance/4fc76a938fb81d3517000003/download
ID: 4

You can also pass a local file as the path, but take into account that you need to configure the datastore's SAFE_DIRS parameter to make it work.
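
A rough sketch of that setup (the directory is illustrative; SAFE_DIRS is an attribute of the datastore template, which can be edited with onedatastore update):

$ onedatastore update default    # add SAFE_DIRS="/var/tmp/images" to the template
$ oneimage create -d default --name local_image --path /var/tmp/images/disk.img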

We can also create an image from scratch, for example a raw 512 MB image that will be connected using virtio:

$ oneimage create --name scratch --prefix vd --type datablock --fstype raw \
--size 512m -d default
ID: 5

You can get more information on the parameters by issuing oneimage create --help.

Creating VM templates is very similar. For example, creating a VM template that uses both disks and a network, adding contextualization options and enabling VNC:

$ onetemplate create --name my_vm --cpu 4 --vcpu 4 --memory 16g \
--disk ttylinux,scratch --network network --net_context --vnc
ID: 1
$ onetemplate instantiate my_vm
VM ID: 10

The output of onevm show was also changed to show disks and NICs in an easier-to-read fashion:

$ onevm show 10
VIRTUAL MACHINE 10 INFORMATION
ID : 10
NAME : my_vm-10

[...]

VM DISKS
 ID TARGET IMAGE                               TYPE SAVE SAVE_AS
  0    hda ttylinux                            file   NO       -
  1    vda scratch                             file   NO       -

VM NICS
 ID NETWORK                                IP               MAC VLAN BRIDGE
  0 network                       192.168.0.8 02:00:c0:a8:00:08   no vbr0

VIRTUAL MACHINE TEMPLATE
CONTEXT=[
  DISK_ID="2",
  ETH0_DNS="192.168.0.1",
  ETH0_GATEWAY="192.168.0.1",
  ETH0_IP="192.168.0.8",
  ETH0_MASK="255.255.255.0",
  TARGET="hdb" ]
CPU="4"
GRAPHICS=[
  LISTEN="0.0.0.0",
  PORT="5910",
  TYPE="vnc" ]
MEMORY="16384"
TEMPLATE_ID="1"
VCPU="4"
VMID="10"

This way you can get useful information about the VM at a glance. If you need more information you can still use the -x option or the new --all option, which will print all the information in the template as previous versions did.

oneimage show was also changed so you can check which VMs are using an image:

$ oneimage show scratch
IMAGE 5 INFORMATION
ID : 5
NAME : scratch

[...]

VIRTUAL MACHINES

ID USER     GROUP    NAME            STAT UCPU    UMEM HOST             TIME
10 oneadmin oneadmin my_vm-10        pend    0      0K              0d 00h03

This is also true for onevnet show:

$ onevnet show network
VIRTUAL NETWORK 0 INFORMATION
ID : 0
NAME : network

[...]

VIRTUAL MACHINES

ID USER     GROUP    NAME            STAT UCPU    UMEM HOST             TIME
 9 oneadmin oneadmin template1       pend    0      0K              0d 00h30
10 oneadmin oneadmin my_vm-10        pend    0      0K              0d 00h04

Another nice parameter is --dry. This parameter can be used with onetemplate and oneimage create. It will print the generated template but will not register it. It is useful when you want to create a complex template but don't want to type it from scratch: just redirect it to a file and edit it to add any features not available from the command line.
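
For example, a sketch of that workflow (the template name and parameters are just illustrative):

$ onetemplate create --name complex_vm --cpu 2 --memory 2g \
--disk ttylinux,scratch --network network --vnc --dry > complex_vm.tmpl
$ vi complex_vm.tmpl    # add the attributes not exposed by the CLI
$ onetemplate create complex_vm.tmpl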

One last thing: the parameters for onevm create are exactly the same as those of onetemplate create. If you just want to create a fire-and-forget VM you can use onevm create in the same way.
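
A quick sketch, reusing the images and network from the examples above (the VM name is illustrative):

$ onevm create --name quick_vm --cpu 1 --memory 1g \
--disk ttylinux --network network --vnc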

OpenNebula 4.0 will be available for testing really soon. Until then, we will keep you updated on the new features in posts like this. You can also check the posts released in the last weeks about the Ceph integration, the new scheduling feature, and the new Sunstone.
Stay tuned!