Last week, we participated in the 2013 edition of ISC Cloud. The event addressed currently popular topics such as High Performance Computing (HPC) as a service, industrial and scientific application software in the Cloud, new software licence models, security in the Cloud, computing power and data protection.

The event started with a very successful hands-on tutorial on “Building your Cloud for HPC, here and now, in 3 hours!”. During the tutorial, the attendees were able to deploy, manage and operate a two-node OpenNebula cloud on their own laptops.

We have uploaded all the tutorial material for those who couldn’t be there. You can find the VirtualBox images used in the tutorial and the slides here:

During the afternoon, Ignacio M. Llorente described the most demanded features for building HPC and science clouds and illustrated how OpenNebula effectively addresses the challenges of cloud usage, scheduling, security, networking and storage. If you missed this talk, you can find it here:

See you next time!

What an interesting week we’ve had at EGI TF and the Cloud Interoperability Week! We had the opportunity to meet old friends, shake hands with users that we only knew by email, and also had the chance to thank some of our community contributors personally.

Most of the people we spoke with were already OpenNebula users, so we had a great time hearing about their use cases and customizations, and gathering feature requests.

Our presentation was followed by an interesting question-and-answer session where different cloud technologies were represented. You can get the presentation slides from our SlideShare account.

We were a bit concerned about the short time slot assigned to the hands-on tutorial on Thursday, but things went smoothly and practically all attendees managed to get their own OpenNebula cluster running with 2 nodes and a couple of VMs. They even had time to configure the rOCCI server and play a bit with it.

See you next time!

In our experience as providers of private clouds based on OpenNebula, the single most common request among small and medium enterprises is the deployment of virtual desktops, both for converting existing desktops and moving them to OpenNebula, and for creating custom environments like computer classrooms for schools. This is, in hindsight, not difficult to explain: a cloud infrastructure brings a set of management advantages that are clearly perceived by end users who frequently face IT problems like blue screens, viruses and stability issues. Being able to move from one place to another while keeping the desktop VM active, rebooting into a snapshot taken before a virus infection, or simply cloning a “good” master VM are substantial advantages, especially for smaller companies or public administrations.

We found the combination of OpenNebula and KVM as the hypervisor to be especially convenient, and we deployed several small clouds serving small groups of desktops (5 to 10) with great success. If you need to start from an existing desktop, the easiest approach is an external tool like VMware Converter, with the recommendation to skip the installation of the usually enabled VMware tools (totally useless within KVM); apart from Converter, there are slightly more complex approaches based on tools like Clonezilla (a good summary can be found here). The performance of converted machines is however not optimal, due to the lack of the appropriate paravirtualized drivers for I/O and network – so the next task is to convince Windows that it needs to install those drivers. To do so, download the latest Virtio binary drivers from this site, load the .iso image in OpenNebula and register it as a CDROM image. Then, create a small empty datablock exposed on the virtio bus, as sketched below.
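A minimal image template along these lines (the name, size and description are purely illustrative; the key attribute is DEV_PREFIX="vd", which puts the disk on the virtio bus so Windows detects the new controller and asks for a driver):

  NAME        = "virtio-trigger"
  TYPE        = "DATABLOCK"
  SIZE        = 128              # size in MB, anything small will do
  FSTYPE      = "raw"
  DEV_PREFIX  = "vd"             # expose the disk on the virtio bus
  PERSISTENT  = "NO"
  DESCRIPTION = "Empty disk used only to trigger the virtio driver installation"

The image can then be registered with oneimage create, pointing it to this template file and to a datastore of your choice.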

Then create a new template for the Windows machine, linking as images the converted Windows disk, the small VD image and the Virtio ISO image. Set the network device model to “virtio”, and boot the VM. After the boot process completes, open the Windows Control Panel, go to System and then to the Device Manager, where you will find a set of unidentified hardware devices: one for the virtual SCSI controller, one for the network card and a few additional PCI devices used to control memory ballooning (the capability to report real memory usage to the hypervisor, so that unused memory can be remapped to something more useful). For each unidentified device, right-click and install the drivers, selecting the virtio CDROM as the source. Shut down the machine, remove the small VD disk and the ISO image from the template, and you now have a fast, accelerated Windows image ready for deployment.
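Put together, the intermediate template used during the driver installation could look roughly like this (image, network and template names are illustrative):

  NAME     = "win-desktop-virtio-install"
  CPU      = 1
  MEMORY   = 2048

  DISK     = [ IMAGE = "windows-converted" ]   # disk produced by VMware Converter
  DISK     = [ IMAGE = "virtio-trigger" ]      # small empty virtio datablock
  DISK     = [ IMAGE = "virtio-win-drivers" ]  # virtio driver ISO, registered as CDROM

  NIC      = [ NETWORK = "desktops", MODEL = "virtio" ]

  GRAPHICS = [ TYPE = "vnc", LISTEN = "0.0.0.0" ]

Once the drivers are installed and the VM has been shut down, the two helper DISK entries can simply be removed from this template.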

Now that we have our VDI raw material, we can think about how to deploy it. In general, we identified three possible approaches:

The simplest approach is to load the VM image of each desktop into OpenNebula, assign a static IP address to each VM and connect using RDP from a remote device like a thin client or a customized Linux distribution. The advantage of this approach is that RDP allows for simple export of local devices and USB ports; recent improvements to the protocol (RDP7 with RemoteFX, used in Windows 7 and 8) allow for fast multimedia redirection and several improved capabilities, already implemented in open source clients like FreeRDP. The simplicity of this approach is however hampered by the fact that it only works if Windows boots successfully and there is no interference in the login process. If something goes wrong, it is necessary to connect out-of-band to the console (for example using the integrated VNC console in Sunstone) and solve whatever prevents the virtual machine from starting up. This approach is also limited to Windows machines, so if you have a mix of different operating systems you are forced to connect with different tools.
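As a minimal sketch of this setup (network, address and user names are illustrative), the fixed address can be requested directly in the VM template and the desktop then reached with FreeRDP from the thin client:

  # In the VM template: request a fixed address from the virtual network
  NIC = [ NETWORK = "desktops", IP = "192.168.0.50" ]

  # From the thin client (the exact syntax depends on the FreeRDP version)
  xfreerdp /u:alice /v:192.168.0.50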

A more flexible approach is the use of the SPICE protocol. Originally created by Qumranet and released as open source after Red Hat acquired the company, it is now integrated directly into KVM. It supports multimedia, USB redirection and several advanced features, and it has drivers for both Windows (here) and Linux (installing the xorg-video-qxl drivers). We found that several Linux distributions require a small additional file in the /etc/qemu directory called ich9-ehci-uhci.cfg (which can be found here) for USB redirection to work properly; after adding it, add the following libvirt snippet to the Windows template:

RAW=[
  DATA="<qemu:commandline>
     <qemu:arg value='-readconfig'/>
     <qemu:arg value='/etc/qemu/ich9-ehci-uhci.cfg'/>
     <qemu:arg value='-chardev'/>
     <qemu:arg value='spicevmc,name=usbredir,id=usbredirchardev1'/>
     <qemu:arg value='-device'/>
     <qemu:arg value='usb-redir,chardev=usbredirchardev1,id=usbredirdev1,bus=ehci.0,debug=3'/>
     <qemu:arg value='-chardev'/>
     <qemu:arg value='spicevmc,name=usbredir,id=usbredirchardev2'/>
     <qemu:arg value='-device'/>
     <qemu:arg value='usb-redir,chardev=usbredirchardev2,id=usbredirdev2,bus=ehci.0,debug=3'/>
     <qemu:arg value='-chardev'/>
     <qemu:arg value='spicevmc,name=usbredir,id=usbredirchardev3'/>
     <qemu:arg value='-device'/>
     <qemu:arg value='usb-redir,chardev=usbredirchardev3,id=usbredirdev3,bus=ehci.0,debug=3'/>
  </qemu:commandline>",
  TYPE="kvm" ]

to have 3 redirected USB channels. Start the Windows VM and connect through a suitable SPICE client like Spicy, and you will get your connection, audio and all your USB devices properly working:

http://www.linux-kvm.com/sites/default/files/usbredirect6-2.png

This approach works quite well: the VM is stable and performance within a LAN is good, with no visible artifacts. USB redirection is reliable, and it is possible to compile KVM with support for smart cards, useful for environments like hospitals or law enforcement where a smart card is used for authentication or digital signatures.
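To expose the VM over SPICE in the first place, the GRAPHICS section of the OpenNebula template can be set accordingly, and the client pointed at the host and port assigned to the VM (a sketch; the host name and port below are illustrative):

  # In the VM template
  GRAPHICS = [ TYPE = "spice", LISTEN = "0.0.0.0" ]

  # From the client, once the VM is running on a known host and port
  spicy -h kvm-host-01 -p 5950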

The last approach is through a separate VM (again, inside OpenNebula) that performs the task of “application publishing”, in a way similar to Citrix. We use Ulteo, a French software system that provides application publishing and management through an integrated web portal. You can connect Windows servers or Linux machines; if you need to publish applications from Windows you can either use the traditional Terminal Server environment, or the much cheaper TSplus application that provides a similar experience. After installing Ulteo from its DVD inside an OpenNebula VM, you end up with a web interface to select the applications you want to publish:


After the configuration you simply point your browser to the Ulteo portal interface, and you get a personalized desktop with all your Linux and Windows applications nicely integrated.

For a more in-depth presentation, including specific I/O advice on hosting VDI-specific virtual machines, I hope you will join me at the first OpenNebulaConf in Berlin next week. See you there!

Next week, just before our first OpenNebulaConf in Berlin, we will be busy at ISC Cloud 2013 in Heidelberg. This conference brings together developers, users, managers and decision makers from industry, research and development, giving them the opportunity to find out about the newest trends in Cloud Computing and to participate in intensive and valuable discussions. The event will address currently popular topics such as High Performance Computing (HPC) as a service, industrial and scientific application software in the Cloud, new software licence models, security in the Cloud, computing power and data protection.

We will participate with a hands-on tutorial and an invited talk:

  • The tutorial “Building your Cloud for HPC, here and now, in 3 hours!” will cover the process of building a private cloud using OpenNebula, with a special focus on configuring and operating the cloud instances for the execution of virtualized computing services. The attendees will build, configure and operate their own OpenNebula cloud!
  • The invited talk “Cloud Architectures for HPC – Industry Case Studies” will describe the most demanded features for building HPC and science clouds, and will illustrate, using real-life case studies from leading research and industry organizations, how OpenNebula effectively addresses the challenges of cloud usage, scheduling, security, networking and storage.

C12G Labs is sponsoring ISC Cloud 2013. It would be great to see you in Heidelberg on Monday!

There is a new guide available that assists you in the process of deploying a highly available cluster for OpenNebula using Red Hat Cluster Suite. This guide complements the Virtual Machines High Availability Guide, which covers failover protection against hardware and operating system outages within your virtualized IT environment.

The new guide includes a detailed description of the following topics:
  • General highly available OpenNebula deployment architecture
  • Installing the Cluster Software
  • Managing the Cluster
  • Fencing
  • Fail-over events
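To give an idea of what such a setup looks like, the rgmanager part of a Red Hat Cluster Suite configuration typically declares OpenNebula as a managed service tied to a floating IP address that fails over between the front-end nodes. The sketch below is only illustrative (node names, the address and the init script path are placeholders); the guide itself is the authoritative reference:

  <?xml version="1.0"?>
  <cluster name="opennebula" config_version="1">
    <clusternodes>
      <clusternode name="one-frontend-1" nodeid="1"/>
      <clusternode name="one-frontend-2" nodeid="2"/>
    </clusternodes>
    <rm>
      <!-- The floating IP and the OpenNebula daemon fail over together -->
      <service name="opennebula" autostart="1" recovery="relocate">
        <ip address="10.0.0.100" monitor_link="on"/>
        <script file="/etc/init.d/opennebula" name="opennebula"/>
      </service>
    </rm>
  </cluster>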

Enjoy!

As stated by the European Commission, OpenNebula has played an important role in driving and supporting the transition to cloud computing. We are convinced that OpenNebula will carry significant weight in shaping the data center of the future, and the high-quality list of speakers at the OpenNebula Conference 2013 seems to agree with us.

The Conference, to be held in Berlin on September 24-26, is just a week away. It is the perfect place to learn about practical cloud computing, useful for researchers, developers, administrators, integrators, architects, executives and IT managers tackling their computational and business challenges.

Registration is closing soon, but if this appeals to you, there are still a few seats left. Don’t waste another second and register now!

Next week we will be busy at the EGI Technical Forum & Cloud Interoperability Week events in Madrid, Spain.

If you are attending, make sure you don’t miss these presentations:

Oh, and we will be around in our booth (#16), so come and see us to talk about HPC, Cloud Computing, and play with our live OpenNebula demo.

See you there!

On the first day of the conference we are going to have a couple of activities that I’m sure you’ll be interested in. There is a tutorial for people who want to learn how to deploy and use OpenNebula, and in parallel we will have a free-form hacking session.

This hacking session is meant for people who already have OpenNebula deployed and know how to use it. There you can catch up with the OpenNebula developers and have conversations that are a bit hard to have on the mailing list. It is also a great place to meet other people who may be doing things similar to you, or who have already sorted out some of the problems you are facing. Here are some ideas on what you can do in the hacking session:

  • Ask about some new feature that is coming in new releases
  • Get help modifying the Sunstone interface for your company
  • Integrate your billing system with OpenNebula accounting
  • Create a new Transfer Manager driver that knows how to talk to your SAN
  • Migrate a driver you’ve made for an old OpenNebula version to the newest one
  • Optimize your OpenNebula deployment

But you can also help us with the project! For example:

  • Discuss a feature you would like to see included
  • Help improve or develop a new feature for OpenNebula
  • Give advice or add new documentation
  • Bug hunting!

This session will be held on the first day (September 24) from 2pm to 6pm, but we will be available during the whole conference. If there’s no time on the first day, or you want to talk to us any other day, just come and say hi!

See you in Berlin!

OpenNebula has a very flexible approach to user and resource management. You can organize your users in groups or VDCs, allow them to manage their own resources with permissions, restrict how many resources users or groups can consume, or implement intricate use cases with ACL rules.

For OpenNebula 4.4 we will also introduce secondary groups. These will work in a similar way to Unix groups: users will have a primary group and, optionally, several secondary groups. This new feature is completely integrated with the mechanisms mentioned above, allowing, for example, the following:

  • The list of images visible to a user contains all the images shared within any of their groups.
  • You can deploy a VM using an Image from one of your groups, and a second Image from another group.
  • New resources are created in the owner’s primary group, but users can later change that resource’s group.
  • Users can change their primary group to any of their secondary ones.
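From the command line this could look roughly as follows (a sketch based on the planned 4.4 CLI; the user, group and image names are illustrative):

  # Add a secondary group to a user
  $ oneuser addgroup alice web-dev

  # Make one of the secondary groups the user's new primary group
  $ oneuser chgrp alice web-dev

  # Move an existing image to another of the owner's groups
  $ oneimage chgrp my-windows-image web-dev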

And, as always, secondary groups can be easily managed through our Sunstone web interface:

Stay tuned for the beta release, we’ll be happy to get your feedback!

As you may know, the lineup and agenda for the first OpenNebula Conference (taking place 24-26 September in Berlin) are now final. This high-quality content ensures that the conference is the perfect place to learn about Cloud Computing, and to understand how industry leaders in different sectors are using OpenNebula in their datacenters.

There is, however, still a chance to contribute to the conference if you are interested. The lightning talks are 5-minute plenary presentations focusing on one key point. This can be a new project, product, feature, integration, experience, use case, collaboration invitation, quick tip or demonstration. This session is an opportunity for ideas to get the attention they deserve. There are still several lightning talk slots available, so now is the time to register and send us your proposal!