Yesterday the first OpenNebula TechDay took place, and it was a wonderful experience. Seeing such an involved community, with so many great stories, thoughtful feedback, great conversations and a really friendly atmosphere, left all of us participants greatly satisfied with the event.


Presentations

We would like to thank all the speakers and all the attendees; we are sincerely looking forward to hearing more stories from them. We would also like to send a heartfelt thank you to the host and sponsor of the event, BIT.nl, which, besides being an amazing hosting company that likes to do things well (which makes sense, since they are using OpenNebula), made the event run extremely smoothly and kept it really well organized. Special thanks to Stefan Kooman and Bart Vrancken!

Looking forward to meeting you in the next editions of the OpenNebula Technology Days!

This week we gave an invited talk in the open-source cloud session at Future Internet Assembly 2014. Its aim was to show how OpenNebula is driving innovation in cloud computing, impacting the adoption of private cloud, and enabling business in the cloud.

We covered the following scenarios:

  • First, most organizations adopt cloud to optimize their IT investment, to improve existing services or to support new business and service models. In this scenario, OpenNebula lowers the barriers for new organizations to build their private cloud.
  • Second, many organizations like the fact that open source allows great customization to meet individual requirements. They can build a differentiated cloud service to meet their customers’ needs or to offer new cloud provision models for a specific market segment or geography.
  • Third, open-source also encourages and supports innovation in the development of new cloud products. We have seen many examples of how its use lowers the barriers for new ICT players to create their own cloud offerings.

We wanted to present experiences from users, so we included some details about how OpenNebula is being used by four European companies. Big thanks to Armin Deliomini (Runtastic), Stefan Kooman (BIT.nl), Carlo Daffara (CloudWeavers) and Bernd Erk (Netways)!

The yearly spring meeting of FLOSS UK takes place in Brighton this year. The venue, the Old Ship Hotel, is typically British and about 100 meters away from the sea. Apart from the lovely setting, I have the chance to talk about our journey from manually configured Xen instances to a fully fledged OpenNebula cloud. Having used OpenNebula for years now is a big advantage for us and helps us a lot in our daily business.

There was a lot of activity in the OpenNebula project over the last year, so I have plenty to talk about: native GlusterFS support and improved network and storage drivers are just a few examples. If you are in Brighton or have a chance to come over, please join my talk tomorrow.

I talked to several people yesterday, and many of them gave OpenNebula a shot after listening to my talk about it last year. There is nothing better, I think :-)

OpenNebula contextualization is a system that writes VM configuration parameters into a CDROM image, plus a package installed in the VM that configures the system using this data. By default it comes with scripts to set the network configuration (IP, DNS), hostname, allowed SSH keys, etc. You can even easily create your own version of the packages with new scripts that configure other parts of the system, as described in the documentation. Still, if you don’t want to create your own context packages, you can specify scripts to be started at boot time. In this post we will provide an example of how to use this system to prepare a machine to be configured with Puppet, but these tips are useful for any other configuration management system.
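
To make this more concrete, here is a minimal sketch of how the context data can be inspected from inside a running guest; the CDROM device name (/dev/sr0) is an assumption and may differ in your VM:

# mount -t iso9660 -o ro /dev/sr0 /mnt
# source /mnt/context.sh
# echo $ETH0_IP
# umount /mnt

The context.sh file on that CDROM defines the variables from the template’s CONTEXT section (ETH0_IP, SSH_PUBLIC_KEY, custom ones like NODE_NAME), which is what the contextualization scripts source before doing their work.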

The prerequisites for this example are:

  • An already installed Puppet master in a network reachable by your VMs
  • A CentOS 6.x base image with context packages >= 4.4 and an internet connection

To make the VM be configured as soon as the Puppet agent starts, you can edit /etc/puppet/puppet.conf on the Puppet master machine and set autosign = true in the [main] section (remember to restart the daemon). This way you won’t need to sign the certificates manually:

[main]
autosign = true

In case you are not using autosign, you should use the puppet cert command to sign new host certificates and wait until the Puppet agent in those nodes wakes up again. By default, agents check in every 30 minutes.
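
If you do sign by hand, a minimal sketch looks like this (the node name www-15 is just an example):

$ puppet cert list
$ puppet cert sign www-15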

The installation and configuration of the Puppet agent in the nodes can be done with the aforementioned init scripts. We can add this script to the files datastore; I’ve called it puppet_centos:

#!/bin/bash

PUPPET_MASTER_NAME=puppet.opennebula.org
PUPPET_MASTER_IP=10.0.0.2

if [ -z "$NODE_NAME" ]; then
    NODE_NAME=$(hostname)
fi

# Add node to /etc/hosts
echo "$ETH0_IP    $NODE_NAME" >> /etc/hosts

# Add puppet server to /etc/hosts
echo "$PUPPET_MASTER_IP    $PUPPET_MASTER_NAME" >> /etc/hosts

# Install puppetlabs repo (for latest packages)
rpm -ivh https://yum.puppetlabs.com/el/6/products/x86_64/puppetlabs-release-6-7.noarch.rpm

# Install puppet agent package
yum install -y puppet

cat << EOF > /etc/puppet/puppet.conf
[main]
vardir = /var/lib/puppet
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = \$vardir/ssl

[agent]
pluginsync      = true
report          = true
ignoreschedules = true
daemon          = false
ca_server       = $PUPPET_MASTER_NAME
certname        = $NODE_NAME
environment     = production
server          = $PUPPET_MASTER_NAME
EOF

# Enable puppet agent
puppet resource service puppet ensure=running enable=true

Make sure you change the Puppet master IP and name to match your environment.
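
If you prefer the command line for this step too, registering the script in the files datastore could look like the following sketch (the local path and datastore name assume a default installation):

$ oneimage create --name puppet_centos --path /tmp/puppet_centos.sh \
      --type CONTEXT --datastore files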

Now in the template for the new VM you will have to add some bits in the context section:

  • add the puppet_centos script in the files (FILES_DS) section
  • set the “init scripts” value to puppet_centos
  • add a new variable called NODE_NAME set to $NAME-$VMID. This way the node name for the VM will be the same as the OpenNebula VM name.

If you are using the command line the context section will be something similar to this:

CONTEXT=[
  FILES_DS="$FILE[IMAGE=puppet_centos]",
  INIT_SCRIPTS="puppet_centos",
  NETWORK="YES",
  NODE_NAME="$NAME-$VMID",
  SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]" ]

Now we have most of the bits needed for the automatic configuration of the VMs after boot. The only thing left is to add the configuration for the nodes.

Since we are working with virtual machines, we won’t know beforehand the names/IPs of the new VMs that we could refer to when selecting the role of each one. To overcome this limitation, and taking advantage of OpenNebula’s name generation, we can define the node names in the Puppet master with regular expressions that tell us the role of these VMs. For example, in /etc/puppet/manifests/site.pp we can define this node:

node /^www-\d+/ {
    include apache
}

Now when instantiating the template we can provide the name www. OpenNebula will add the VM ID to the certname, so we will have www-15, www-16 and www-17, for example. All these node names will match the regular expression, so the VMs will get the apache class applied.


In case you are using the command line, you can use this command, replacing centos_template with the name or ID of your template and 3 with the number of VMs you want to instantiate:

$ onetemplate instantiate centos_template -m 3 --name www
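
Once a VM is up, you don’t have to wait for the default 30-minute agent interval to see Puppet do its work: a one-off run can be triggered inside the node with the standard agent flags:

# puppet agent --test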

Last week we participated in CeBIT 2014. In the unlikely case you are not familiar with CeBIT, it is the world’s largest and most international computer expo (Wikipedia’s words, not ours ;) ). We were demoing the latest features in OpenNebula 4.6, as well as hanging around the booth of the active and community-engaged Netways, whom we would like to thank for their support. We also gave a talk in the Open Source Park about the history of the OpenNebula project.

All in all, a very good experience. CeBIT is a very interesting place to meet people who are looking for what you offer, so if you are planning to attend next year and need an outstanding Cloud Management Platform (aka OpenNebula), see you in Hannover!

The OpenNebula project is proud to announce the availability of OpenNebula 4.6 Beta (Carina). This release brings many new features and stabilizes features that were introduced in previous versions.

OpenNebula 4.6 introduces important improvements in several areas. The provisioning model has been greatly simplified by supplementing user groups with resource providers. This extended model, the Virtual Data Center, offers an integrated and comprehensive framework for resource allocation and isolation.

Another important new feature lands in the OpenNebula core, which has undergone a minor redesign of its internal data model to allow the federation of OpenNebula daemons. With OpenNebula Carina your users can access resource providers from multiple data centers in a federated way.

With Carina, the OpenNebula team has started a journey to deliver a more intuitive and simpler provisioning experience. Our goal is to bring end-user usability up to the same level as system administration and operations. First, the Sunstone graphical interface has been tweaked to better support user workflows. It has also been improved to support the new Marketplace version, which makes it even easier for a user to get a virtual application up and running.

Finally, some other areas have received the attention of the OpenNebula developers: for example, better Gluster support through libgfapi, improved access to large pools through pagination, and the option to limit the resources exposed by a host, among many other features included in Carina.

As usual, OpenNebula releases are named after a nebula. The Carina Nebula (NGC 3372) is one of the largest nebulae in the sky. It can only be seen from the southern hemisphere, in the Carina constellation.

Thanks to the community members and users who have contributed to this software release by being active in the discussions, answering user questions, or providing patches for bugfixes, features and documentation.

The new features for VDCs, Federations and OVA support in the Marketplace introduced in OpenNebula 4.6 were funded by Produban in the context of the Fund a Feature Program.


The OpenNebula Project is proud to announce the final agenda and line-up of speakers for our first OpenNebula TechDay. The TechDay will be hosted by BIT.nl, internet service provider and datacenter in The Netherlands, on the 26th of March. The agenda includes a hands-on cloud installation and operation workshop, presentations from OpenNebula community members and users, and an open space to discuss passionate questions, burning ideas, features, integrations…

9:00 – 13:00: Hands-on Workshop
Jaime Melis, Engineer at OpenNebula and C12G Labs

13:00 – 14:00: Lunch

14:00 – 14:30: Introduction to the TechDays
Rubén S. Montero, Chief Architect at OpenNebula and C12G Labs

14:30 – 15:00: New Features in OpenNebula
Jaime Melis, Engineer at OpenNebula and C12G Labs

15:00 – 15:30: OpenNebula Experiences @ BIT.nl
Stefan Kooman, BIT.nl

15:30 – 15:45: Coffee Break

15:45 – 16:15: OpenNebula Experiences @ SURFsara
Ander Astudillo, Consultant / Scientific developer at SURFsara

16:15 – 16:45: OpenNebula from the SysAdmin Perspective
Toshaan Bharvani, VanTosh

16:45 – 17:30: Open Space

There are still some seats available, so register now! Looking forward to meeting you in Ede.


BIT is a business-to-business internet service provider in the Netherlands, specialized in colocation and managed hosting. BIT delivers the backbone of the IT and internet infrastructure of quality-aware customers. Reliability is the focus of BIT’s services. BIT differentiates itself through its knowledge, years of experience and pragmatic solutions. It helps that all people at BIT share a passion for technology.

We wanted to have a Cloud Management Platform (CMP) that would be easy to manage and would not cost too many resources to keep up-to-date. It should also be easy to incorporate into our infrastructure, be flexible, and be easy to adjust to our needs. As an ISP operating our own infrastructure, we were looking for software able to build a “virtual datacenter”, with more functionality than “just” being able to provision a bunch of resources. We researched some of the main CMPs: OpenStack, Eucalyptus, oVirt and OpenNebula. Two of them were tested in a lab environment: OpenStack and OpenNebula. We had lots of trouble getting OpenStack to work, hit some bugs, and in the end we could never get it to do what we wanted. It became clear the project was moving fast, at least code was flying around, subprojects became separate entities of their own, etc. We were worried it would take a lot of time to get it all running, let alone upgrade to newer versions.

OpenNebula worked pretty much out of the box, apart from a bug with Open vSwitch that stood in the way at first. We wanted a platform able to work with different hypervisors. We’re using KVM now, but for one reason or another VMware, Xen or Hyper-V should be possible; we didn’t want to restrict ourselves to only one (that’s why oVirt didn’t make it). Besides that, it should be easy to understand how things are tied together: basically KISS. If systems get overly complex, sooner or later you get bitten by them. You don’t have complete oversight of every little component, and when the shit hits the fan you don’t know where to start cleaning…

The OpenNebula core itself might be pretty complex, but most of the work is done by drivers, which are most of the time easy-to-understand shell scripts doing the actual work. They use commands that sysadmins are already familiar with, so they aren’t scary and are easy to debug. OpenNebula has quite a few interfaces. A nice web GUI always helps to get familiar with a project: if you can just “click” something together that actually works, it’s pretty impressive. But the OCCI interface and the XML-RPC API are really useful for integrating with our workflow and administrative systems, especially with the nifty “hooks” feature.

OSS to us is more than just “free to use” software, although the liberal license makes it easy to just start using it without having to worry about all kinds of licensing issues. It gives you the possibility, if needed, to make your own adjustments to fit your use case. OpenNebula is flexible enough to extend without “hacking the source”: although that is possible, it is (most of the time) not needed, which is a big plus because it makes following upstream easy. But OSS by itself is not enough for a project to be successful. The way the software is developed is of vital importance. The OpenNebula way of developing software is open and user-focused; the “voice” of the community really matters. The cliché “software made with and for the community” really applies here. If users get (positive) feedback on their input, they feel appreciated and become more “connected” to the project. The atmosphere on the mailing list is friendly and open. No flame wars or negativism here, so it keeps users in instead of pushing them away.


In a nutshell, the benefits of using OpenNebula are:

  • Simple but powerful / flexible
  • Works out of the box
  • Easy to maintain / upgrade
  • (API) Interface(s)
  • OSS
  • Great community / development organization (that became obvious as soon as we joined the mailing list)

And the benefits of using OSS:

  • Source available (i.e. able to audit code, adjust code, etc)
  • Be able to influence the (feature) roadmap by joining the community
  • Quicker development (more potential developers)
  • OSS projects usually offer an easy way to communicate with the developers. With (big) commercial organizations this is often not possible or very difficult. It’s all about technical excellence, not about profit.

A few days ago we were at the Cloud Expo Europe 2014 event in London. As part of the Open Cloud Forum sessions about open source cloud solutions, there was an OpenNebula tutorial.

Now, this is a hands-on tutorial where attendees are supposed to follow the slides and build their own small OpenNebula installation in a virtual environment, and the people who showed up were not really interested in replicating the tutorial on their laptops… But after the initial let-down, it turned out this was a very engaged audience that showed great interest! Because the introduction and basic configuration tutorial was done fairly quickly, we had time to continue with a question & answer session that lasted longer than the tutorial itself.


There were some common questions we get from time to time:

“It looks far better than I expected for what I thought was a research-only project.” Well, OpenNebula is a solid product, and it has been ready to be used in production for quite some time. Take a look at the featured users page.

“But what if I need a level of support that an open source community cannot guarantee?” Good news! C12G Labs, the company behind OpenNebula, has you covered. The best thing is that the commercial support is offered for the same open source packages available to anyone.

“Is the VMware support on par with the other hypervisors?” Absolutely! All the features are supported. You can even use a heterogeneous environment with the VMware hosts grouped into a cluster, working alongside a KVM or Xen cluster.

We also had time to talk about advanced OpenNebula features. Our documentation is quite extensive, and reading all of it is definitely not appealing, but if you are starting with OpenNebula I recommend you at least skim through all the sections. You may find out that you have several storage options, that OpenNebula can manage groups of VMs and has auto-scaling features, or that VM guests can report back to ONE.

People were also very interested in the customization capabilities of OpenNebula. Besides the powerful driver mechanism that allows administrators to tailor the exact behaviour of OpenNebula, you can also customize the way it looks. The CLI output can be tweaked in the etc configuration files, and Sunstone can be adjusted, down to which buttons are shown, with Sunstone Views.

Thanks to the engaged audience for their great interest and their feedback. See you next year!

GlusterFS is a distributed filesystem with replication and storage distribution features that come in really handy for virtualization. The storage can be mounted using NFS or the GlusterFS FUSE adapter and used like any other shared filesystem. This way of using it is very convenient, as it works the same way as other filesystems, but it carries the overhead of NFS or FUSE.

The good news is that for some time now qemu and libvirt have had native support for GlusterFS. This makes it possible for VMs running from images stored in Gluster to talk directly to its servers, making I/O much faster.
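
A quick way to check that your qemu build has this native support is to point qemu-img directly at a gluster:// URL; the server, port, volume and image names below are just placeholders:

$ qemu-img info gluster://gluster-server:24007/volume1/test.qcow2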

The integration was made to be as similar as possible to the shared drivers (in fact it uses the shared tm and fs datastore drivers). Datastore management operations like image registration or cloning still use the FUSE-mounted filesystem, so OpenNebula administrators will feel at home with it.

[Diagram: GlusterFS datastore architecture]

This feature is headed for 4.6 and is already in the git repository and the documentation. Basically the configuration to use this integration is as follows.

  • Configure the server to allow non-root user access to Gluster. Add this line to ‘/etc/glusterfs/glusterd.vol’:

    option rpc-auth-allow-insecure on

    And execute this command:

    # gluster volume set <volume> server.allow-insecure on

  • Set the ownership of the files to ‘oneadmin’:

    # gluster volume set <volume> storage.owner-uid <oneadmin uid>
    # gluster volume set <volume> storage.owner-gid <oneadmin gid>

  • Mount GlusterFS using FUSE at some point in your frontend:

    # mkdir -p /gluster
    # chown oneadmin:oneadmin /gluster
    # mount -t glusterfs <server>:/<volume> /gluster

  • Create shared datastores for images and system and add these extra parameters to the images datastore:

    DISK_TYPE = GLUSTER
    GLUSTER_HOST = <gluster_server>:24007
    GLUSTER_VOLUME = <volume>
    CLONE_TARGET="SYSTEM"
    LN_TARGET="NONE"

  • Link the system and images datastore directories to the GlusterFS mount point:

    $ ln -s /gluster /var/lib/one/datastores/100
    $ ln -s /gluster /var/lib/one/datastores/101

Now when you start a new VM you can check that its deployment file points to the server configured in the datastore. Another nice feature is that storage will fall back to a replica server in case one of them crashes. The information about replicas is gathered automatically, so there is no need to add more than one host (specifying several hosts is currently not supported by libvirt anyway).
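
One way to verify this on a host, sketched with an example VM ID (OpenNebula names libvirt domains one-<vmid>), is to dump the domain XML of a running VM and look for the Gluster disk source:

# virsh dumpxml one-42 | grep -A 3 "protocol='gluster'"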

If you are interested in this feature, now is a good time to download and compile the master branch to test it. There is still some time until a release candidate of 4.6 comes out, but we’d love to have feedback as soon as possible so we can fix any problems it may have.

We want to thank the Netways people for helping us with this integration and the testing of the qemu/gluster interface, and John Mark from the Gluster team for his technical assistance.