AppMarket 2.0 RC (1.9.85)

The OpenNebula team is pleased to announce the release candidate of AppMarket 2.0.

[Screenshot: the appliance list in Sunstone]

In addition to the features included in the previous beta release, this version features a new simplified import dialog and fixes minor bugs reported by the community. Also, AppMarket Worker has been extended to support other OVF versions.

[Screenshot: the new simplified import dialog in Sunstone]

This new version can be downloaded from the OpenNebula download page, and information on how to install and use it is available on GitHub.

oneInsight: A 2D-Load Visualization Addon for OpenNebula-Managed Hosts

I’m pleased to announce oneInsight, a visualization addon for OpenNebula that gives users an at-a-glance insight into the load of managed hosts. It provides various kinds of load mappings, currently covering the following metrics:

  • CPU used by OpenNebula-managed virtual machines;
  • Memory used by managed virtual machines;
  • Effective CPU used by all system processes, including processes outside of managed virtual machines;
  • Effective memory used by all system processes.

Here is a screenshot showing an overview of CPU usage.

Screenshot of oneInsight

Benefits

oneInsight brings many benefits, for cloud operators as well as for business managers:

  • Provides simple, comprehensible load charts that give you an accurate, at-a-glance insight into how loaded your servers are, so you can plan migrations and capacity upgrades when necessary;
  • Provides, via tooltips and popups, details about each server in zero or one click;
  • Offers a high-class visualization that saves you from digging through command-line output;
  • Ships as a lightweight HTML/Javascript stack that can be deployed on any server within your IT infrastructure; it only needs a valid OpenNebula user account and network access to the OpenNebula server.

How oneInsight Works

oneInsight works out of the box on the vast majority of Linux operating systems, provided the following tools are installed:

  • curl command line interface
  • The Bash interpreter
  • The cron time-based job scheduler
  • A web server such as Apache or nginx; even Python's SimpleHTTPServer module works fine (see the sketch below this list)
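Once those pieces are in place, publishing the frontend is simple. A minimal sketch, assuming the oneInsight files live under /srv/oneinsight and that a cron job, as described in the oneInsight documentation, periodically refreshes the monitoring data:

# serve the static HTML/Javascript frontend with Python's built-in web server
cd /srv/oneinsight && python -m SimpleHTTPServer 8080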

Read the documentation to get started.

What Next & Contributions

oneInsight is a new project, and there is a lot left to do around data visualization in OpenNebula. Contributors are welcome; we follow the GitHub pull request model for contributions to code and documentation. Stay tuned.

Wrap-up of the OpenNebula TechDay in Ede, NL

Yesterday the first OpenNebula TechDay took place, and it has been a wonderful experience. Seeing such an involved community, with so many great stories, such valuable feedback, great conversations and a really friendly environment made all of us participants feel greatly satisfied with the event.

[Photos from the TechDay in Ede: the hands-on tutorial and the speakers]

Presentations

We would like to thank all the speakers and all the attendees; we are sincerely looking forward to hearing more stories from them. We would also like to send a heartfelt thank you to the host and sponsor of the event, BIT.nl, which, besides being an amazing hosting company that likes to do things well (which makes sense, since they are using OpenNebula), made the event run extremely smoothly and kept it really well organized. Special thanks to Stefan Kooman and Bart Vrancken!

Looking forward to meeting you in the next editions of the OpenNebula Technology Days!

Automatic configuration of VMs with Puppet

OpenNebula contextualization is a system that writes VM configuration parameters into a CDROM image, together with a package installed in the VMs that configures the system using this data. By default it comes with scripts to set the network configuration (IP, DNS), hostname, allowed SSH keys, etc. You can even create your own version of the packages with new scripts that configure other parts of the system, as stated in the documentation. Still, if you don’t want to create your own context packages, you can specify scripts to be started at boot time. In this post we provide an example of how to use this system to prepare a machine to be configured with Puppet, but these tips are useful for any other configuration management system.
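As a quick illustration of boot-time scripts, an inline command can be passed through the START_SCRIPT context variable. This is a minimal sketch, assuming context packages recent enough to support START_SCRIPT; the Puppet example below uses the INIT_SCRIPTS mechanism instead:

CONTEXT=[
  START_SCRIPT="yum install -y ntp" ]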

The prerequisites for this example are:

  • An already installed Puppet master on a network reachable by your VMs
  • A CentOS 6.x base image with context packages >= 4.4 and an internet connection

To make the VM configure itself as soon as the Puppet agent starts, you can change /etc/puppet/puppet.conf on the Puppet master machine and set autosign = true in the [main] section (remember to restart the daemon). This way you won’t need to sign the certificates manually:

[main]
autosign = true

In case you are not using autosign, you should use the puppet cert command to sign new host certificates and wait until the Puppet agent on those nodes wakes up again. By default they do so every 30 minutes.
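If you go the manual route, a typical session on the Puppet master is to list the pending certificate signing requests and then sign them, either one by one or all at once (the certname below is illustrative):

$ puppet cert list
$ puppet cert sign www-15.example.com
$ puppet cert sign --all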

The installation and configuration of the Puppet agent on the nodes can be done with the aforementioned init scripts. We can add this script to the files datastore; I’ve called it puppet_centos:

#!/bin/bash

PUPPET_MASTER_NAME=puppet.opennebula.org
PUPPET_MASTER_IP=10.0.0.2

if [ -z "$NODE_NAME" ]; then
    NODE_NAME=$(hostname)
fi

# Add node to /etc/hosts
echo "$ETH0_IP    $NODE_NAME" >> /etc/hosts

# Add puppet server to /etc/hosts
echo "$PUPPET_MASTER_IP    $PUPPET_MASTER_NAME" >> /etc/hosts

# Install puppetlabs repo (for latest packages)
rpm -ivh https://yum.puppetlabs.com/el/6/products/x86_64/puppetlabs-release-6-7.noarch.rpm

# Install puppet agent package
yum install -y puppet

cat << EOF > /etc/puppet/puppet.conf
[main]
vardir = /var/lib/puppet
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = \$vardir/ssl

[agent]
pluginsync      = true
report          = true
ignoreschedules = true
daemon          = false
ca_server       = $PUPPET_MASTER_NAME
certname        = $NODE_NAME
environment     = production
server          = $PUPPET_MASTER_NAME
EOF

# Enable puppet agent
puppet resource service puppet ensure=running enable=true

Make sure you change the Puppet master IP and name to match your setup.
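If you prefer the command line, the script can be registered in the files datastore with something like the following (the datastore name and local path are illustrative; adjust them to your setup):

$ oneimage create --name puppet_centos --type CONTEXT \
    --path /tmp/puppet_centos --datastore files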

Now in the template for the new VM you will have to add some bits in the context section:

  • add the puppet_centos script to the files (FILES_DS) section
  • set the “init scripts” value to puppet_centos
    [Screenshot: the context Files and Init scripts fields in Sunstone]
  • add a new variable called NODE_NAME set to $NAME-$VMID, so the node name for the VM will be the same as the OpenNebula VM name
    [Screenshot: the custom context variables in Sunstone]

If you are using the command line, the context section will look similar to this:

CONTEXT=[
  FILES_DS="$FILE[IMAGE=puppet_centos]",
  INIT_SCRIPTS="puppet_centos",
  NETWORK="YES",
  NODE_NAME="$NAME-$VMID",
  SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]" ]

Now we have most of the bits needed for automatic configuration of the VMs after boot. The only thing left is to add configuration for the nodes.

Since we are working with virtual machines, we won’t know beforehand the names/IPs of the new VMs that we could refer to when selecting the role of each one. To overcome this limitation, and taking advantage of OpenNebula’s name generation, we can define the node names in the Puppet master using regular expressions that tell us the role of these VMs. For example, in /etc/puppet/manifests/site.pp we can define this node:

node /^www-\d+/ {
    include apache
}

Now when instantiating the template we can provide the name www. Since NODE_NAME is set to $NAME-$VMID, OpenNebula will append the VM ID to the certname, so we will have www-15, www-16 and www-17, for example. All these node names will match the regular expression and install Apache.

[Screenshot: instantiating the template with the name www in Sunstone]

If you are using the command line you can use this command, replacing centos_template with the name or ID of your template and 3 with the number of VMs you want to instantiate:

$ onetemplate instantiate centos_template -m 3 --name www

Experiences at CeBIT 2014

Last week we participated in CeBIT 2014. In the unlikely case you are not familiar with CeBIT, it is the world’s largest and most international computer expo (Wikipedia’s words, not ours ;) ). We were demoing the latest features in OpenNebula 4.6, as well as hanging around the booth of the active and community-engaged Netways, whom we would like to thank for their support. We also gave a talk in the Open Source Park about the history of the OpenNebula project.

[Photo from the OpenNebula presence at CeBIT 2014]


All in all, a very good experience. CeBIT is a very interesting place to meet people who are looking for what you offer, so if you are planning to attend next year and are in need of an outstanding Cloud Management Platform (aka OpenNebula), see you in Hannover!

Why We Use OpenNebula at BIT


BIT is a business-to-business internet service provider in the Netherlands, specialized in colocation and managed hosting. BIT delivers to quality-aware customers the backbone of their IT and internet infrastructure. Reliability is the focus of BIT’s services. BIT differentiates itself through its knowledge, years of experience and pragmatic solutions. It helps that all people at BIT share a passion for technology.

We wanted a Cloud Management Platform (CMP) that would be easy to manage and would not cost too many of our resources to keep up to date. It should also be easy to incorporate into our infrastructure, be flexible and be easy to adjust to our needs. As we’re an ISP operating our own infrastructure, we were looking for software able to build a “virtual datacenter”, with more functionality than “just” provisioning a bunch of resources. We did some internet research on the main CMPs: OpenStack, Eucalyptus, oVirt and OpenNebula. Two of them were tested in a lab environment: OpenStack and OpenNebula. We had lots of trouble getting OpenStack working, hit some bugs, etc. In the end we could never get it to do what we wanted. It became clear the project is moving fast, at least code was flying around, subprojects became separate entities of their own, etc. We were worried it would take a lot of time to get it all running, let alone upgrade to newer versions.

OpenNebula worked pretty much out of the box, apart from a bug with Open vSwitch that stood in the way at first. We wanted a platform that would be able to work with different hypervisors. We’re using KVM now, but for one reason or another VMware, Xen or Hyper-V should be possible. We didn’t want to restrict ourselves to only one (that’s why oVirt didn’t make it). Besides that, it should be easy to understand how things are tied together, basically KISS. If systems get overly complex, sooner or later you get bitten by them: you don’t have complete oversight of every little component, and when the shit hits the fan you don’t know where to start cleaning up…

The OpenNebula core itself might be pretty complex, but most of the work is done by drivers, and those are most of the time easy-to-understand shell scripts. They use commands that sysadmins are already familiar with, so they aren’t scary and are easy to debug. OpenNebula has quite a few interfaces. A nice web GUI always helps to get familiar with a project: if you can just “click” something together that actually works, it’s pretty impressive. But the OCCI interface and the XML-RPC API are really useful to integrate OpenNebula with our workflow and administrative systems, especially with the nifty “hooks” feature.



OSS to us is more than just “free to use” software, although the liberal license makes it easy to just “start using” it without the need to worry about all kinds of licensing issues. It gives you the possibility, if needed, to make your own adjustments and fit your use case. OpenNebula is flexible enough to extend without the need for “hacking the source”. Although that is possible, it is (most of the time) not needed, which is a big plus because it makes following “upstream” easy. But OSS by itself is not enough for a project to be successful. It is the way the software is developed that is of vital importance. The OpenNebula way of developing software is open and user focused; the “voice” of the community really matters. The cliché “software made with and for the community” really applies here. If users get (positive) feedback about their input, they feel appreciated and become more “connected” to the project. The “atmosphere” on the mailing list is friendly and open. No flame wars or negativism here, so it keeps users “in” instead of pushing them away.


In a nutshell, the benefits of using OpenNebula are:

  • Simple but powerful / flexible
  • Works out of the box
  • Easy to maintain / upgrade
  • (API) Interface(s)
  • OSS
  • Great community / development organization (that became obvious as soon as we joined the mailing list)

And the benefits of using OSS:

  • Source available (i.e. able to audit code, adjust code, etc)
  • Be able to influence the (feature) roadmap by joining the community
  • Quicker development (more potential developers)
  • OSS projects usually offer an easy way to communicate with developers; with (big) commercial organizations this is often not possible or very difficult. It’s all about technical excellence, not about profit.

Native GlusterFS Image Access for KVM Drivers

GlusterFS is a distributed filesystem with replication and storage distribution features that come in really handy for virtualization. This storage can be mounted as a filesystem using NFS or the GlusterFS FUSE adapter and then used like any other shared filesystem. This way of using it is very convenient as it works the same way as other filesystems, but it carries the overhead of NFS or FUSE.

The good news is that for some time now qemu and libvirt have had native support for GlusterFS. This makes it possible for VMs running from images stored in Gluster to talk directly to its servers, making I/O much faster.

The integration was made to be as similar as possible to the shared drivers (in fact, it uses the shared tm and fs datastore drivers). Datastore management operations like image registration or cloning still use the FUSE-mounted filesystem, so OpenNebula administrators will feel at home with it.

[Diagram: GlusterFS integration architecture]

This feature is headed for 4.6 and is already in the git repository and the documentation. Basically, the configuration to use this integration is as follows:

  • Configure the server to allow non-root user access to Gluster. Add this line to ‘/etc/glusterfs/glusterd.vol’:

    option rpc-auth-allow-insecure on

    And execute this command:

    # gluster volume set <volume> server.allow-insecure on

  • Set the ownership of the files to ‘oneadmin’:

    # gluster volume set <volume> storage.owner-uid <oneadmin uid>
    # gluster volume set <volume> storage.owner-gid <oneadmin gid>

  • Mount GlusterFS using FUSE at some point in your frontend:

    # mkdir -p /gluster
    # chown oneadmin:oneadmin /gluster
    # mount -t glusterfs <server>:/<volume> /gluster

  • Create shared datastores for images and system, and add these extra parameters to the images datastore (a sample definition is sketched after this list):

    DISK_TYPE = GLUSTER
    GLUSTER_HOST = <gluster_server>:24007
    GLUSTER_VOLUME = <volume>
    CLONE_TARGET="SYSTEM"
    LN_TARGET="NONE"

  • Link the system and images datastore directories to the GlusterFS mount point:

    $ ln -s /gluster /var/lib/one/datastores/100
    $ ln -s /gluster /var/lib/one/datastores/101
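Putting the attributes above together, the images datastore definition could look like the following sketch (the driver settings assume the shared drivers mentioned earlier, and the server and volume names are placeholders). Save it, for example, as gluster_images.ds and register it with onedatastore create:

NAME           = "gluster_images"
DS_MAD         = "fs"
TM_MAD         = "shared"
DISK_TYPE      = "GLUSTER"
GLUSTER_HOST   = "<gluster_server>:24007"
GLUSTER_VOLUME = "<volume>"
CLONE_TARGET   = "SYSTEM"
LN_TARGET      = "NONE"

$ onedatastore create gluster_images.ds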

Now when you start a new VM you can check in its deployment file that the disk points to the server configured in the datastore. Another nice feature is that storage will fall back to a secondary server in case one of them crashes. The information about replicas is gathered automatically; there is no need to add more than one host (and doing so is currently not supported by libvirt).
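To double-check the native access, you can dump the libvirt definition of a running VM: the disk should appear as a network disk whose source uses the gluster protocol and points at the GLUSTER_HOST and GLUSTER_VOLUME configured in the datastore. For example (the VM ID is illustrative):

$ virsh -c qemu:///system dumpxml one-42 | grep -A 6 gluster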

If you are interested in this feature, now is a good time to download and compile the master branch to test it. There is still some time until a release candidate of 4.6 comes out, but we’d love to get feedback as soon as possible so we can fix any problems it may have.

We want to thank the Netways people for helping us with this integration and with testing the qemu/gluster interface, and John Mark from the Gluster team for his technical assistance.

OpenNebula at Runtastic

 

About 10 months ago our small ops team had to design a new infrastructure solution to run the services needed for our ecosystem. Until then we ran everything on root servers at a huge German web hoster. We knew that we wanted to move to our own infrastructure, but still be able to burst our load peaks into the cloud. We played around with several solutions on the market and finally wanted to meet some people at CeBIT 2013 to get some hints for our decision.

To be honest, we didn’t really consider OpenNebula at that time, but since the only person we could get hold of during lunchtime at CeBIT was Tino Vazquez from C12G, we started to. A week later we fired up our first testing environment, and four months later our production OpenNebula cluster started its work.

Why did we choose OpenNebula? Most of the software we use at Runtastic is open source, and for that reason we wanted to build on KVM instead of some proprietary solution. So in the first step we were actually looking for a management solution on top of KVM. However, OpenNebula fulfills our requirements in several other respects as well. We wanted something simple and stable. We are a small team with a lot of different services to maintain, and we didn’t want an additional spot we would need to focus on. What we can say so far: we use OpenNebula, that’s it. No big issues, no big efforts in performance tuning. The infrastructure runs and runs and runs… Another thing is that we have a lot of ideas for extending OpenNebula (integrating LXC, triggering Chef bootstrap from within OpenNebula, making use of NetApp cloning features, …). We know Ruby. OpenNebula is written in Ruby. Perfect match.

 

[Photo: installing the hardware]

At the moment we have a pretty straightforward setup based on two NetApp 2240s providing NFS shared storage and 28 Cisco UCS servers running KVM on Ubuntu 12.04. Everything is connected via 10G Cisco Nexus switches. We still have bare-metal hardware for our databases (MySQL, MongoDB and Cassandra) and for the services needed by OpenNebula itself, but everything else, like web servers, mobile endpoints and backend services, runs in our cloud. All in all we run about 280 quite big (compared to a typical AWS machine) VMs consuming about 200 cores, 4 TB of memory and more than 20,000 IOPS on average.

Everything in our infrastructure, including OpenNebula and KVM, is set up automatically with Chef, so it is very easy for us to add new nodes to our cluster or create new datastores.

The next step will be to integrate the VMware vCloud-based IaaS solution of our datacenter provider to cover load peaks.

CentOS Dojo and FOSDEM Aftermath

This edition of the CentOS Dojo has been very intense. Besides being crowded with very interesting people and great conversations (as is customary in all Dojos), the hackathon went even better than we could have hoped. The following items were achieved:

  • CloudInit 0.7.4 is now 100% supported by OpenNebula and CentOS. Big thanks to Sam Kottler for providing that package and assisting us with the process.
  • An initial set of systemd scripts for OpenNebula was developed; it will be published as soon as CentOS 7 is out.
  • The OpenNebula-node-xen package was developed; it will be added to the CentOS packages very shortly. Thanks to the Xen guys and to Johnny Hughes for his assistance with the kickstart file.

We also had the chance to meet new OpenNebula users, who as usual provided great feedback and exciting comments. Also worth mentioning is the conversation we had with John Mark from the GlusterFS project, who, besides providing excellent ideas and recommendations for Gluster, will be working with us very shortly on an announcement!

[Photo from the CentOS Dojo]

FOSDEM has also been very exciting: interesting conversations around cloud, interoperability, OpenNebula demos, storage solutions and new projects using OpenNebula that will be announced very soon!

Big thanks to Karanbir Singh for organizing the Dojo and the hackathon, and for having us at the CentOS table at FOSDEM.

Stand by for a bunch of exciting announcements that have blossomed these past days!

OpenNebula for Virtual Desktops

In our experience as providers of private clouds based on OpenNebula, the single most common request among small and medium enterprises is the deployment of virtual desktops, either by converting existing desktops and moving them to OpenNebula or by creating custom environments like computer classrooms for schools. This is, in hindsight, not difficult to explain: a cloud infrastructure brings a set of management advantages that are clearly perceived by end users who frequently face IT problems like blue screens, viruses and stability issues. Being able to move from one place to another while keeping the desktop VM active, rebooting into a snapshot taken before a virus infection or, in general, cloning a “good” master VM are substantial advantages, especially for smaller companies or public administrations.

We found the combination of OpenNebula and KVM as the hypervisor to be especially convenient, and we have deployed several small clouds serving small groups of desktops (5 to 10) with great success. If you need to start from an existing desktop, the easiest approach is an external software tool like VMware Converter, with the recommendation to avoid installing the usually enabled VMware tools (totally useless within KVM); apart from Converter, there are slightly more complex approaches based on tools like Clonezilla (a good summary can be found here). The performance of converted machines is, however, not optimal, due to the lack of the appropriate paravirtualized drivers for I/O and network, so the next task is to convince Windows that it needs to install those drivers. To do so, download the latest virtio binary drivers from this site, load the .iso image into OpenNebula and register it as a CDROM image. Then, create a small empty datablock exposed on the virtio bus, along the lines of the sketch below.
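A minimal sketch of what such a datablock image template could look like (the name, size in MB and datastore are illustrative assumptions; the key point is the virtio device prefix, so Windows detects a new disk controller and asks for drivers). Save it, for example, as virtio_helper.tpl and register it from the command line:

NAME       = "virtio_helper"
TYPE       = "DATABLOCK"
SIZE       = 100
DEV_PREFIX = "vd"

$ oneimage create virtio_helper.tpl --datastore default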

Then create a new template for the Windows machine, attaching as images the converted Windows disk, the small VD image and the virtio ISO image. Set the network device as “virtio”, and reboot. After the boot process completes, open the Windows Control Panel, go to System, and in the device window you will find a set of unidentified hardware devices: one for the virtual SCSI controller, one for the network card and a few additional PCI devices that are used to control memory ballooning (the capability to pass real memory usage to the hypervisor, so that unused memory can be remapped to something more useful). For each unidentified device, right-click and install the drivers, selecting the virtio CDROM as the source. Shut down the machine, remove the small VD disk and the ISO image from the template, and you now have a fast, accelerated Windows image ready for deployment.

Now that we have our VDI raw material, we can think about how to deploy it. In general, we identified three possible approaches:

The simplest approach is to load the VM images of each desktop into OpenNebula, assign a static IP address to each VM and connect using RDP from a remote device like a thin client or a customized Linux distribution. The advantage of this approach is that RDP allows for simple export of local devices and USB ports; recent improvements to the protocol (RDP7 with RemoteFX, used in Windows 7 and 8) allow for fast multimedia redirection and several improved capabilities, already implemented in open source clients like FreeRDP. The simplicity of this approach is, however, hampered by the fact that it works only if Windows boots successfully and there is no interference in the login process. If something goes wrong, it is necessary to connect out-of-band to the console (for example using the integrated VNC console in Sunstone) and solve whatever problem prevents the successful startup of the virtual machine. This approach is also limited to Windows machines, so if you have a mix of different operating systems you are forced to connect with different tools.
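For example, a connection from a Linux thin client using a recent FreeRDP build might look like this (the username and IP are placeholders, and the options for redirecting local devices and USB ports depend on the FreeRDP version in use):

$ xfreerdp /u:desktop-user /v:10.0.0.50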

A more flexible approach is the use of the SPICE protocol. Originally created by Qumranet and released as open source after the company’s acquisition by Red Hat, it is now integrated directly within KVM. It supports multimedia, USB redirection and several advanced features, and it has drivers for both Windows (here) and Linux (installing the xorg-video-qxl drivers). We found that several Linux distributions require a small additional file in the /etc/qemu directory called ich9-ehci-uhci.cfg (which can be found here) for USB redirection to work properly; after adding it, add the following libvirt snippet to the Windows template:

RAW=[
  DATA="<qemu:commandline>
     <qemu:arg value='-readconfig'/>
     <qemu:arg value='/etc/qemu/ich9-ehci-uhci.cfg'/>
     <qemu:arg value='-chardev'/>
     <qemu:arg value='spicevmc,name=usbredir,id=usbredirchardev1'/>
     <qemu:arg value='-device'/>
     <qemu:arg value='usb-redir,chardev=usbredirchardev1,id=usbredirdev1,bus=ehci.0,debug=3'/>
     <qemu:arg value='-chardev'/>
     <qemu:arg value='spicevmc,name=usbredir,id=usbredirchardev2'/>
     <qemu:arg value='-device'/>
     <qemu:arg value='usb-redir,chardev=usbredirchardev2,id=usbredirdev2,bus=ehci.0,debug=3'/>
     <qemu:arg value='-chardev'/>
     <qemu:arg value='spicevmc,name=usbredir,id=usbredirchardev3'/>
     <qemu:arg value='-device'/>
     <qemu:arg value='usb-redir,chardev=usbredirchardev3,id=usbredirdev3,bus=ehci.0,debug=3'/>
  </qemu:commandline>",
  TYPE="kvm" ]

to have 3 redirected USB channels. Start the Windows VM and connect through a suitable SPICE client like Spicy, and you will get your connection, audio and all your USB devices properly working:

http://www.linux-kvm.com/sites/default/files/usbredirect6-2.png

This approach works quite well: the VM is stable and performance within a LAN is quite good, with no visible artifacts. USB redirection is stable, and it is possible to compile KVM with support for smart cards, useful for environments like hospitals or law enforcement where a smart card is used for authentication or digital signatures.
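For OpenNebula to expose a SPICE display in the first place, the VM template also needs a GRAPHICS section of the SPICE type. A minimal sketch, assuming an OpenNebula release with SPICE support (check the documentation of your version):

GRAPHICS = [
  TYPE   = "SPICE",
  LISTEN = "0.0.0.0" ]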

The last approach is through a separate VM (again, inside OpenNebula) that performs the task of “application publishing”, in a way similar to Citrix. We use Ulteo, a French software system that provides application publishing and management through an integrated web portal. You can connect Windows servers or Linux machines; if you need to publish applications from Windows, you can either use the traditional Terminal Server environment or the much cheaper TSplus application, which provides a similar experience. After installing the Ulteo DVD inside an OpenNebula VM, you end up with a web interface to select the applications you want to publish:


After the configuration you simply point your browser to the Ulteo portal interface, and you get a personalized desktop with all your Linux and Windows applications nicely integrated.

For a more in-depth presentation, including specific I/O advice for hosting VDI-specific virtual machines, I hope you will join me at the first OpenNebulaConf in Berlin next week. See you there!