Automatic configuration of VMs with Puppet

OpenNebula contextualization is a system that writes VM configuration parameters into a CDROM image, plus a package installed in the VMs that configures the system using this data. By default it comes with scripts to set the network configuration (IP, DNS), hostname, allowed SSH keys, etc. You can even easily create your own version of the packages with new scripts that configure other parts of the system, as described in the documentation. Still, if you don’t want to create your own context packages, you can simply specify scripts to be started at boot time. In this post we will show how to use this system to prepare a machine to be configured with Puppet, but these tips apply to any other configuration management system.

The requisites for this example are:

  • An already installed Puppet master in a network reachable by your VMs
  • CentOS 6.x base image with context package >= 4.4 and internet connection

To have the VM configured as soon as the Puppet agent starts, you can edit /etc/puppet/puppet.conf on the Puppet master machine and set autosign = true in the [main] section (remember to restart the daemon). This way you won’t need to sign the certificates manually:

[main]
autosign = true
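
For example, on a CentOS 6 master installed from the Puppet Labs packages, the restart would look roughly like this (a sketch; the service name is an assumption and may differ if your master runs under Apache/Passenger):

# restart the Puppet master so the autosign change is picked up
service puppetmaster restart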

If you are not using autosign, you should use the puppet cert command to sign new host certificates and wait until the Puppet agent on those nodes wakes up again. By default it does so every 30 minutes.
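
For reference, the manual workflow on the Puppet master looks roughly like this (a sketch; www-15 is just an example node name, matching the naming used later in this post):

# list the pending certificate signing requests
puppet cert list

# sign the request of a specific node
puppet cert sign www-15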

The installation and configuration of the Puppet agent on the nodes can be done with the aforementioned init scripts. We can add the following script to the files datastore; I’ve called it puppet_centos:

#!/bin/bash

PUPPET_MASTER_NAME=puppet.opennebula.org
PUPPET_MASTER_IP=10.0.0.2

if [ -z "$NODE_NAME" ]; then
    NODE_NAME=$(hostname)
fi

# Add node to /etc/hosts
echo "$ETH0_IP    $NODE_NAME" >> /etc/hosts

# Add puppet server to /etc/hosts
echo "$PUPPET_MASTER_IP    $PUPPET_MASTER_NAME" >> /etc/hosts

# Install puppetlabs repo (for latest packages)
rpm -ivh https://yum.puppetlabs.com/el/6/products/x86_64/puppetlabs-release-6-7.noarch.rpm

# Install puppet agent package
yum install -y puppet

cat << EOF > /etc/puppet/puppet.conf
[main]
vardir = /var/lib/puppet
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = \$vardir/ssl

[agent]
pluginsync      = true
report          = true
ignoreschedules = true
daemon          = false
ca_server       = $PUPPET_MASTER_NAME
certname        = $NODE_NAME
environment     = production
server          = $PUPPET_MASTER_NAME
EOF

# Enable puppet agent
puppet resource service puppet ensure=running enable=true

Make sure you change the Puppet master IP and name to match your setup.
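
If you prefer the CLI, the script can be registered in the files datastore with something along these lines (a sketch; it assumes the script is saved locally as puppet_centos and that the default files datastore, usually named files, is used):

$ oneimage create --name puppet_centos --path ./puppet_centos --type CONTEXT -d files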

Now in the template for the new VM you will have to add some bits in the context section:

  • puppet_centos script in files (FILES_DS) section
  • set the “init scripts” value to puppet_centos
  • add a new variable called NODE_NAME set to $NAME-$VMID. This way the node name for the VM will be the same as the OpenNebula VM name.

If you are using the command line the context section will be something similar to this:

CONTEXT=[
  FILES_DS="$FILE[IMAGE=puppet_centos]",
  INIT_SCRIPTS="puppet_centos",
  NETWORK="YES",
  NODE_NAME="$NAME-$VMID",
  SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]" ]

Now we have most of the bits needed for the automatic configuration of the VMs after boot. All that is left is to add configuration for the nodes.

Since we are working with virtual machines we won’t know beforehand the name/IP of the new VMs that we could refer to when assigning a role to each one. To overcome this limitation, and taking advantage of OpenNebula’s name generation, we can define the node names on the Puppet master with regular expressions so we can tell the role of each VM. For example, in /etc/puppet/manifests/site.pp we can define this node:

node /^www-\d+/ {
    include apache
}

Now, when instantiating the template, we can provide the name www. OpenNebula will add the VM ID to the certname, so we will have www-15, www-16 and www-17, for example. All these node names will match the regular expression, so apache will be installed on all of them.


If you are using the command line, you can run the following, replacing centos_template with the name or ID of your template and 3 with the number of VMs you want to instantiate:

$ onetemplate instantiate centos_template -m 3 --name www
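
Once the VMs are up, a quick way to check that everything is wired correctly is from the Puppet master and from one of the nodes (a sketch, assuming autosign is enabled and the context script ran at boot):

# on the Puppet master: signed certificates are marked with a +
$ puppet cert list --all

# on one of the VMs: force an immediate agent run instead of waiting 30 minutes
$ puppet agent --test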

Experiences at CeBIT 2014

Last week we participated in CeBIT 2014. In the unlikely case you are not familiar with CeBIT, it is the world’s largest and most international computer expo (Wikipedia’s words, not ours ;) ). We were demoing the latest features in OpenNebula 4.6 and hanging around the booth of the active and community-engaged Netways, whom we would like to thank for their support. We also gave a talk in the Open Source Park about the history of the OpenNebula project.



All in all, a very good experience. CeBIT is a very interesting place to meet people who are looking for what you offer, so if you are planning to attend next year and are in need of an outstanding Cloud Management Platform (aka OpenNebula), see you in Hannover!

OpenNebula 4.6 Beta Released!

The OpenNebula project is proud to announce the availability of OpenNebula 4.6 Beta (Carina). This release brings many new features and stabilizes features that were introduced in previous versions.

OpenNebula 4.6 introduces important improvements in several areas. The provisioning model has been greatly simplified by supplementing user groups with resource providers. This extended model, the Virtual Data Center, offers an integrated and comprehensive framework for resource allocation and isolation.

Another important change has taken place in the OpenNebula core, which has undergone a minor redesign of its internal data model to allow the federation of OpenNebula daemons. With OpenNebula Carina your users can access resource providers from multiple data centers in a federated way.

With Carina the OpenNebula team has started a journey to deliver a simpler, more intuitive provisioning experience for users. Our goal is to bring end-user usability to the same level as system administration and operation. To that end, the Sunstone graphical interface has been tweaked to better support user workflows. It has also been improved to support the new Marketplace version, which makes it even easier for a user to get a virtual application up and running.

Finally, some other areas have received the attention of the OpenNebula developers: for example, better GlusterFS support through libgfapi, improved access to large pools through pagination, or the option to limit the resources exposed by a host, among many other improvements included in Carina.

As usual, OpenNebula releases are named after a nebula. The Carina Nebula (NGC 3372) is one of the largest nebulae in the sky. It can only be seen from the southern hemisphere, in the Carina constellation.

Thanks to the community members and users who have contributed to this software release by being active in the discussions, answering user questions, or providing patches for bug fixes, features and documentation.

The new features for VDCs, Federations and OVA support in the Marketplace introduced in OpenNebula 4.6 were funded by Produban in the context of the Fund a Feature Program.


OpenNebula TechDay Ede – Agenda and Speaker Line-up

The OpenNebula Project is proud to announce the final agenda and line-up of speakers for our first OpenNebula TechDay. The TechDay will be hosted by BIT.nl, an internet service provider and datacenter operator in the Netherlands, on the 26th of March. The agenda includes a hands-on cloud installation and operation workshop, presentations from OpenNebula community members and users, and an open space to discuss passionate questions, burning ideas, features, integrations…

9:00 – 13:00: Hands-on Workshop
Jaime Melis, Engineer at OpenNebula and C12G Labs

13:00 – 14:00: Lunch

14:00 – 14:30: Introduction to the TechDays
Rubén S. Montero, Chief Architect at OpenNebula and C12G Labs

14:30 – 15:00: New Features in OpenNebula
Jaime Melis, Engineer at OpenNebula and C12G Labs

15:00 – 15:30: OpenNebula Experiences @ BIT.nl
Stefan Kooman, BIT.nl

15:30 – 15:45: Coffee Break

15:45 – 16:15: OpenNebula Experiences @ SURFsara
Ander Astudillo, Consultant / Scientific developer at SURFsara

16:15 – 16:45: OpenNebula from the SysAdmin Perspective
Toshaan Bharvani, VanTosh

16:45 – 17:30: Open Space

There are still some seats available, register now! Looking forward to meeting you in Ede.

Why We Use OpenNebula at BIT


BIT is a business-to-business internet service provider in the Netherlands specialized in colocation and managed hosting. BIT delivers the backbone of their IT and internet infrastructure to quality-aware customers. Reliability is the focus of BIT’s services. BIT differentiates itself through its knowledge, years of experience and pragmatic solutions. It helps that everyone at BIT shares a passion for technology.

We wanted to have a Cloud Management Platform (CMP) that would be easy to manage and would not cost too many of our resources to keep up to date. It should also be easy to incorporate into our infrastructure, be flexible and easy to adjust to our needs. As we’re an ISP operating our own infrastructure, we were looking for software that was able to build a “virtual datacenter”, with more functionality than “just” being able to provision a bunch of resources. We researched some of the main CMPs: OpenStack, Eucalyptus, oVirt and OpenNebula. Two of them were tested in a lab environment: OpenStack and OpenNebula. We had lots of trouble getting OpenStack working, hit some bugs, etc. In the end we could never get it to do what we wanted. It became clear the project was moving fast, or at least code was flying around and subprojects were becoming separate entities of their own. We were worried it would take a lot of time to get it all running, let alone upgrade to newer versions.

OpenNebula worked pretty much out of the box, apart from a bug with Open vSwitch that stood in the way at first. We wanted a platform that would be able to work with different hypervisors. We’re using KVM now, but for one reason or another VMware, Xen or Hyper-V should be possible. We didn’t want to restrict ourselves to only one (that’s why oVirt didn’t make it). Besides that, it should be easy to understand how things are tied together, basically KISS. If systems get overly complex, sooner or later you get bitten by them: you don’t have complete oversight of every little component, and when the shit hits the fan you don’t know where to start cleaning…

The OpenNebula core itself might be pretty complex, but most of the work is done by drivers: drivers that are, most of the time, easy-to-understand shell scripts, using commands that sysadmins are already familiar with, and which therefore aren’t scary and are easy to debug. OpenNebula has quite a few interfaces. A nice web GUI always helps to get familiar with a project; if you can just “click” something together that actually works, it’s pretty impressive. But the OCCI interface and the XML-RPC API are really useful to enable integration with our workflow and administrative systems, especially with the nifty “hooks” feature.



OSS to us is more than just “free to use” software, although the liberal license makes it easy to just “start using” it without the need to worry about all kinds of licensing issues. It gives you the possibility, if needed, to make your own adjustments and fit your use case. OpenNebula is flexible enough to extend without the need for “hacking the source”. Although that is possible, it is (most of the time) not needed, which is a big plus because it makes following “upstream” easy. But OSS by itself is not enough for a project to be successful. The way the software is developed is of vital importance. The OpenNebula way of developing software is open and user focused; the “voice” of the community really matters. The cliché “software made with and for the community” really applies here. If users get (positive) feedback about their input they feel appreciated and become more “connected” to the project. The “atmosphere” on the mailing list is friendly and open. No flame wars or negativism here, so it keeps users “in” instead of pushing them away.


In a nutshell, the benefits of using OpenNebula are:

  • Simple but powerful / flexible
  • Works out of the box
  • Easy to maintain / upgrade
  • (API) Interface(s)
  • OSS
  • Great community / development organization (that became obvious as soon as we joined the mailing list)

And the benefits of using OSS:

  • Source available (i.e. able to audit code, adjust code, etc)
  • Be able to influence the (feature) roadmap by joining the community
  • Quicker development (more potential developers)
  • OSS projects usually offer an easy way to communicate with the developers. With (big) commercial organizations this is often not possible or very difficult. It’s all about technical excellence, not about profit.

OpenNebula at Cloud Expo Europe 2014

A few days ago we were at the Cloud Expo Europe 2014 event in London. As part of the Open Cloud Forum sessions about open source cloud solutions, there was an OpenNebula tutorial.

Now, this is a hands-on tutorial where attendees are supposed to follow the slides and build their own small OpenNebula installation in a virtual environment, and the people that showed up were not really interested in replicating the tutorial on their laptops… But after the initial let-down, it turned out this was a very engaged audience that showed great interest! Because the introduction and basic configuration tutorial was done fairly quickly, we had time to continue with a question & answer session that lasted longer than the tutorial itself.


There were some common questions we get from time to time:

“It looks far better than I expected for what I thought was a research-only project.” Well, OpenNebula is a solid product, and it has been ready to be used in production for quite some time. Take a look at the featured users page.

“But what if I need a level of support that an open source community cannot guarantee?” Good news! C12G Labs, the company behind OpenNebula, has you covered. The best thing is that the commercial support is offered for the same open source packages available to anyone.

“Is the VMware support on par with the other hypervisors?” Absolutely! All the features are supported. You can even use a heterogeneous environment with the VMware hosts grouped into a cluster, working alongside a KVM or Xen cluster.

We also had time to talk about advanced OpenNebula features. Our documentation is quite extensive and reading all of it is definitely not appealing, but if you are starting with OpenNebula I recommend that you at least skim through all the sections. You may find out that you have several storage options, that OpenNebula can manage groups of VMs and has auto-scaling features, or that VM guests can report back to ONE.

People were also very interested in the customization capabilities of OpenNebula. Besides the powerful driver mechanism that allows administrators to tailor the exact behaviour of OpenNebula, you can also customize the way it looks. The CLI output can be tweaked in the etc configuration files, and Sunstone can be adjusted, down to which buttons are shown, with the Sunstone Views.

Thanks to the engaged audience for their great interest and their feedback. See you next year!

Native GlusterFS Image Access for KVM Drivers

GlusterFS is a distributed filesystem with replication and storage distribution features that come in really handy for virtualization. This storage can be mounted as a filesystem using NFS or the FUSE adapter for GlusterFS and then used like any other shared filesystem. This way of using it is very convenient as it works the same way as other filesystems; still, it has the overhead of NFS or FUSE.

The good news is that for some time now qemu and libvirt have had native support for GlusterFS. This makes it possible for VMs running from images stored in Gluster to talk directly to the Gluster servers, making I/O much faster.

The integration was made to be as similar as possible to the shared drivers (in fact it uses the shared tm and fs datastore drivers). Datastore management operations like image registration or cloning still use the FUSE-mounted filesystem, so OpenNebula administrators will feel at home with it.

(Figure: GlusterFS integration architecture)

This feature is headed for 4.6 and is already in the git repository and the documentation. Basically the configuration to use this integration is as follows.

  • Configure the server to allow non-root user access to Gluster. Add this line to ‘/etc/glusterfs/glusterd.vol’:

    option rpc-auth-allow-insecure on

    And execute this command:

    # gluster volume set <volume> server.allow-insecure on

  • Set the ownership of the files to ‘oneadmin’:

    # gluster volume set <volume> storage.owner-uid=<oneadmin uid>
    # gluster volume set <volume> storage.owner-gid=<oneadmin gid>

  • Mount GlusterFS using FUSE at some point in your frontend:

    # mkdir -p /gluster
    # chown oneadmin:oneadmin /gluster
    # mount -t glusterfs <server>:/<volume> /gluster

  • Create shared datastores for images and system, and add these extra parameters to the images datastore (a complete example template is sketched after this list):

    DISK_TYPE = GLUSTER
    GLUSTER_HOST = <gluster_server>:24007
    GLUSTER_VOLUME = <volume>
    CLONE_TARGET="SYSTEM"
    LN_TARGET="NONE"

  • Link the system and images datastore directories to the GlusterFS mount point:

    $ ln -s /gluster /var/lib/one/datastores/100
    $ ln -s /gluster /var/lib/one/datastores/101
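
As a reference, the images datastore could be created from the command line with a template along these lines (only a sketch: the datastore name, Gluster server and volume are placeholders, and it assumes the shared tm and fs datastore drivers mentioned above):

$ cat gluster_images.ds
NAME           = "gluster_images"
DS_MAD         = "fs"
TM_MAD         = "shared"
DISK_TYPE      = "GLUSTER"
GLUSTER_HOST   = "gluster1.example.com:24007"
GLUSTER_VOLUME = "one_volume"
CLONE_TARGET   = "SYSTEM"
LN_TARGET      = "NONE"

$ onedatastore create gluster_images.ds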

Now, when you start a new VM, you can check in its deployment file that the disk points to the server configured in the datastore. Another nice feature is that the storage will fall back to a secondary server in case one of them crashes. The information about replicas is gathered automatically, so there is no need to add more than one host (and doing so is currently not supported in libvirt).
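
A rough way to double-check this is to grep the deployment file that OpenNebula generates for the VM (a sketch; it assumes the default datastore location and a shared system datastore, and the IDs are placeholders):

$ grep gluster /var/lib/one/datastores/<system_ds_id>/<vm_id>/deployment.0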

If you are interested in this feature, now is a good time to download and compile the master branch to test it. There is still some time until a release candidate of 4.6 comes out, but we’d love to get feedback as soon as possible so we can fix any problems it may have.

We want to thank the people at Netways for helping us with this integration and with testing the qemu/gluster interface, and John Mark from the Gluster team for his technical assistance.

Balance between User Base and Community in OpenStack and OpenNebula

In our last post “OpenNebula vs. OpenStack: User Needs vs. Vendor Driven” we stated that “OpenStack penetration in the market is relatively small compared with the investment made by vendors and VCs”. We have received several emails from people asking for the numbers that support this statement. This conclusion arises from a comparison of the OpenNebula and OpenStack user bases, as well as of the resources invested in development and marketing by each of them.

User Base

OpenStack is experiencing explosive growth in the number of developers, with more than 200 companies contributing code, 15,000 people and 850 companies involved according to its web site, and almost 1,000 developers involved in its latest release. However, the number of users and the size of the deployments are not that impressive, at least compared with this software development force.

Let us compare the user base of OpenNebula and OpenStack by using their latest surveys:

  • According to the most recent OpenStack user survey (November 2013), they received 827 responses, of which 387 were deployments. In 80% of these deployments the number of nodes was below 100, and only 11 deployments had more than 1,000 nodes (hypervisors).
  • On the other hand, in the latest OpenNebula survey (November 2012), OpenNebula received 2,500 responses, of which 820 were deployments. In 70% of these deployments the number of nodes was below 100, and 99 deployments had more than 500 nodes (hypervisors).


We avoid giving references to featured users; both projects could put good references of large-scale cloud deployments on the table. The surveys show that OpenNebula and OpenStack are achieving a similar level of deployment. However, OpenStack presents a ratio of 1/40 between deployments in the survey and the number of people involved, a ratio of 1/3 between deployments and developers, and a ratio of 1/2 between deployments and companies involved. Perhaps not every company contributed to the survey?

We could also use the volume of web searches according to Google Trends to compare the impact of both projects. The ratio in the number of searches between OpenNebula and OpenStack during the last 12 months is 1/20. This mainly reflects the successful marketing of OpenStack. OpenNebula mainly invests its resources in developing technology and serving its users, being truly vendor agnostic and free of marketing.

There is also a quarterly comparative analysis of the community activity (mostly mailing list traffic) of the four main open-source cloud management platforms: OpenStack, OpenNebula, Eucalyptus and CloudStack. The number of threads and participants in OpenStack is one order of magnitude higher than in OpenNebula. This mostly reflects a higher number of developers. Moreover, it is also worth noting that development coordination in OpenNebula is done through a Redmine portal and not through a mailing list.

Resources Invested

We conservatively estimate the investment in OpenStack is approximately $300 million per year:

  • OpenStack Havana involved 950 developers, almost all of them hired by vendors. This is approximately $150 million per year
  • The OpenStack Foundation budget is approximately $10 million per year
  • Marketing costs, i.e. marketing staff and external marketing programs, can be estimated at tens of millions per year
  • Just seven of the many start-ups involved in OpenStack have raised $120 million from VCs; assuming this is spread over 3 years, that is approximately $40 million per year
  • There are other direct costs from many other companies (there are almost 1,000 companies involved) that are also allocating resources to development, training, documentation, etc., plus a big overhead in indirect costs, and of course opportunity costs

So $300 million per year is a good conservative estimate. We have seen other estimates above $0.5 billion per year, some reaching $1 billion per year. In any case, over a few years, it’s billions. Will these companies ever get their money back? I can see VCs starting to ask “Where’s our future money?”. Summarizing: a relatively small user base, and hence market penetration, compared with the investment made by vendors and VCs. OpenNebula, with a budget at least two orders of magnitude lower, is achieving a similar user base. You can draw your own conclusions.

OpenNebula at CeBIT 2014


NETWAYS will be holding a booth at the world’s leading high-tech exhibition, CeBIT 2014, from March 10th to 14th in Hanover, Germany. NETWAYS is a premium partner of C12G Labs and a contributor to the OpenNebula project. Thanks to our broad experience with OpenNebula, interested parties are invited to visit our booth to find out more about the latest innovations and news, as well as to see demonstrations of the latest OpenNebula functionality. For those who already know they want to meet us, we highly recommend making an appointment, but of course you can also just drop by at hall 6 (booth E16 319).

Moreover, all through Thursday 13th and the morning of Friday 14th, members of the OpenNebula team will be hanging around the NETWAYS booth, so please feel free to come by for more information on the project, as well as fresh news on the planned future of OpenNebula.

Aside from that, all OpenNebula fans should definitely not miss the presentation “OpenNebula: Open-Source Enterprise Cloud Simplified” held by Tino at the Open Source Forum (hall 6) on Friday, March 14th.

We’ll see you in Hanover!

OpenNebula Newsletter – February 2014

We want to let you know what we have been up to, with the main news from the last month regarding the OpenNebula project, including what you can expect in the coming months.

Technology

As part of our commitment to fixing bugs reported by the OpenNebula community, a new maintenance release for 4.4 Retina, 4.4.1, was published. This release only includes bug fixes and is a recommended update for everyone running any 3.x or 4.x version of OpenNebula who, for any particular reason, does not want to upgrade their cloud manager to the latest available OpenNebula version.

OpenNebula 4.6 is just around the corner, with the beta release just days away. The team is now deep into the testing and certification process, and ironing out some wrinkles in the new features.

The Sunstone interface is getting a facelift, with various JS & CSS components being updated. Hard work, but it pays off with a modern, cleaner interface.

The other significant features that will be present in OpenNebula 4.6 include the ability to build a federation using several OpenNebulas (yes, plural is coming!) at different datacenters. The replication is performed at the DB level, sharing the user, group, ACL and zone pools, while the other information is kept locally and represents local resources. Groups can have Resource Providers (basically, clusters in a local or remote zone) to form Virtual Datacenters, thus allowing a cloud to be partitioned to enable real and isolated multi-tenancy. Check out the screencast on partitioning clouds with vDCs to get a feel for this new functionality.

Another cool feature that will be present in 4.6 is the ability to import OVAs into OpenNebula. The functionality is being implemented in AppMarket, and will be a complete translation and import of all the resources defined in the OVA: disks, capacity, network… even with the ability to change the disk format.

The AppMarket component has been updated to extend its functionality and enable the management and processing of OVA files. A new component, the AppMarket Worker, is introduced, which handles the OVA package treatment (download, unpacking, OVF parsing) and image format conversion. The release also features a new API and a new AppMarket interface via Sunstone.

Other aspects that are being revisited are storage backends, virtual networking, datastores, image and VM management, and Sunstone. You can find a comprehensive list here.

Some of the above features have been sponsored by Produban in the context of the Fund a Feature program.

Community

The OpenNebula community is as vibrant and active as ever. We want to highlight the great user story told by the people behind runtastic, the popular fitness app. The story is about their migration from a huge German web hoster to a private cloud run by themselves, with the ability to burst peak loads into a public cloud like Amazon EC2. It makes for an interesting read.

There has been a lot of feedback on the development portal, suggesting new features and reporting bugs. We want to thank you all for this; we really appreciate it. This is what makes OpenNebula great software, and it wouldn’t be the same without you!

Outreach

First things first: this year’s OpenNebula Conference will be held in Berlin as well, on the 2nd-4th of December, 2014. If you want to repeat the experience, or find out what it is like for yourself, save the date!

Two articles were posted this month comparing OpenNebula with other similar projects, which we think are worth reading. The first one compares OpenNebula vs OpenStack, while the second extends the comparison to VMware and Ganeti.

We are setting up a number of OpenNebula TechDays around the world. These events are designed for learning about OpenNebula, with a hands-on cloud installation and operation workshop and presentations from community members and users. If you are interested in hosting or sponsoring one, let us know!

The first TechDay is happening in Ede, the Netherlands, and it is hosted by BIT, a Dutch internet service provider. The call for sponsors and for speakers is not closed yet, so if you are interested drop us a line. Upcoming TechDays will be held in Boca Raton, Chicago, Palo Alto (USA), Aveiro (Portugal), Barcelona (Spain), Munich (Germany), Lyon (France) and Timisoara (Romania).

Throughout February the OpenNebula team was present at the CentOS Dojo and FOSDEM, with great feedback and exciting comments, including a productive talk with folks from the GlusterFS project. It was also present at CloudScape VI in Brussels and Cloud Expo Europe in London, with a thoroughly engaged audience in the configuration tutorial, showing great interest and giving exceptional feedback.

The following events are happening this month, with the participation of an OpenNebula team member:

During the following months, members of the OpenNebula team will be speaking in the following events:

Remember that you can see slides and resources from past events on our Events page. We have also created a Slideshare account where you can see the slides from some of our recent presentations.