Cached (SSD) Storage Infrastructure for VMs

Currently there seem to be three choices when it comes to where and how to store your virtual machine images:

  1. Local storage, either raw images or a cooked format (e.g. QCOW2)
  2. Remote storage, typically a shared and/or replicated system like NFS or Gluster
  3. Shared storage over dedicated hardware

There are “many” issues with each of these options in terms of latency, performance, cost and resilience – there is no ‘ideal’ solution. After facing this problem over and over again, we’ve come up with a fourth option:

  4. Cache your storage on a local SSD, but hold your working copy on a remote server, or indeed servers. Using such a mechanism, we’ve managed to eliminate all of the negatives we experienced historically with the other options.


  • Virtual machines run against SSD image caches local to the hypervisor
  • Images are stored remotely and accessed via TCP/IP
  • The cache is LFU (*not* LRU), which makes it relatively ‘intelligent’
  • Bandwidth related operations are typically ‘shaped’ to reduce spikes
  • Cache analysis (1 command) will give you an optimal cache size for your VM usage
  • The storage server supports sparse storage, inline compression and snapshots
  • The system supports TRIM end-to-end, VM deletes are reflected in backend usage
  • All reads/writes are checksummed
  • The database is log-structured and takes sequential writes [which is very robust and very quick]
  • Database writing is “near” wire-speed in terms of storage hardware performance
  • Live migration is supported
  • The cache handles replicas and will parallel-write and stripe-read (RAID 10)
  • Snapshot operations are hot and “instant” with almost zero performance overhead
  • Snapshots can be mounted RO on temporary caches
  • Cache presents as a standard Linux block device
  • Raw images are supported to make importing pre-existing VMs easier
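To illustrate the LFU point above, here is a minimal frequency-based eviction sketch (hypothetical Python, not the actual cache code): it shows why a block that is read often stays resident through a burst of one-off reads, where an LRU cache would have evicted it.

```python
from collections import defaultdict

class LFUCache:
    """Minimal LFU cache: evicts the least-frequently-used entry.

    Illustrative only -- it just demonstrates the eviction policy,
    not the real block-level SSD cache described above.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}                # key -> value
        self.freq = defaultdict(int)  # key -> access count

    def get(self, key):
        if key not in self.data:
            return None               # cache miss
        self.freq[key] += 1
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            # Evict the entry with the lowest access count,
            # regardless of how recently it was touched.
            victim = min(self.data, key=lambda k: self.freq[k])
            del self.data[victim]
            del self.freq[victim]
        self.data[key] = value
        self.freq[key] += 1
```

With LRU, inserting ‘b’ then ‘c’ into a 2-entry cache would evict the frequently-read ‘a’; with LFU, the one-off ‘b’ goes instead.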

Which means…

In terms of how these features compare to traditional mechanisms, network bottlenecks are greatly reduced because the vast majority of read operations are serviced locally. Indeed, if you aim for a cache hit rate of 90%, then from an IO perspective you should be able to run 10x the number of VMs on the same hardware as an NFS-based solution. Write operations are buffered, and you can set an average and a peak rate for writing (per instance), so write peaks are levelled with the local SSD acting as a huge [persistent] write buffer. (This write buffer survives shutdowns and will continue to flush on reboot.)
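The average/peak write shaping described above behaves much like a token bucket. A hypothetical sketch (parameter names are illustrative, not the product’s actual configuration):

```python
import time

class WriteShaper:
    """Token-bucket sketch of average/peak write shaping.

    avg_rate: sustained flush rate (bytes/s); burst: the token backlog
    that bounds the peak.  Purely illustrative -- the real system
    buffers writes on the local SSD and flushes them to the storage
    server at the configured per-instance rate.
    """
    def __init__(self, avg_rate, burst):
        self.avg_rate = avg_rate
        self.burst = burst
        self.tokens = burst
        self.last = time.monotonic()

    def delay_for(self, nbytes):
        """How long a flush of nbytes must wait before being sent."""
        now = time.monotonic()
        # Refill tokens at the average rate, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.avg_rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return 0.0
        deficit = nbytes - self.tokens
        self.tokens = 0.0
        return deficit / self.avg_rate
```

A burst up to `burst` bytes flushes immediately; anything beyond it is delayed so the long-run rate converges on `avg_rate`, which is exactly how write peaks get levelled.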

If you assume a 90% hit rate, then 90% of your requests will be subject to a latency of 0.1ms (SSD) rather than 10ms (HD), so the responsiveness of instances running on the cache, compared (for example) to NFS, is fairly staggering. Take a VM running Ubuntu Server 12.04, type “shutdown -r now”, and time from hitting the return key to when it comes back with a login prompt: my test kit takes under 4 seconds, as opposed to 30-60 seconds on traditional NFS-based kit.
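The latency claim is just a weighted average; a quick sanity check with the figures quoted (0.1ms SSD, 10ms HD):

```python
def effective_latency(hit_rate, cache_ms, backend_ms):
    """Average request latency for a given cache hit rate."""
    return hit_rate * cache_ms + (1 - hit_rate) * backend_ms

# 90% of requests at 0.1 ms (local SSD), 10% at 10 ms (remote HD).
avg = effective_latency(0.90, 0.1, 10.0)
print(round(avg, 2))  # prints 1.09 -- roughly 9x better than 10 ms everywhere
```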

And when it comes to cost, this software has been designed to run on commodity hardware – that means desktop motherboards and SSDs on 1G NICs – although I’m sure it’ll be more than happy to see server hardware should anyone feel that way inclined.

The software is still at the beta stage, but we now have a working interface for OpenNebula. Although it’s not complete, it can be used to create, run and maintain both persistent and non-persistent images. Note that although this should run with any Linux-based hypervisor, every system has its quirks – for now we’re working with KVM only and using Ubuntu 13.10 as a host. (13.04 should also be OK, but there are issues with older kernels so 12.xx doesn’t currently fly [as a host].)

As of today we have a public rack-based testbed, and we should be able to provide a demonstration within the next few weeks. If you’re interested in helping / testing, please do get in touch => gareth [@]

OpenNebula Introductory Webinar by NETWAYS

OpenNebula and NETWAYS have enjoyed a long friendship for years, which has now resulted in a premium partnership. Today I, as a NETWAYS employee, have the honor of writing an article as a guest blogger (Thanks OpenNebula!).

As an open source IT systems company, we have been helping businesses adopt open source software for more than 15 years, offering consulting, support services, managed services, and a wide range of training courses and conferences. In cooperation with C12G, for example, we are organizing the world’s first OpenNebula Conf this September (24-26 Sept. 2013).

To reach even more open source enthusiasts, we are launching our new webinar series on Thursday, 5 September 2013 at 14:00.

Naturally, we will kick off with a webinar about OpenNebula!

Registration takes place directly on our website.

So if you are looking for a first glimpse of this open source solution, or for further details, you are in exactly the right place!

We are already looking forward to both events and to lively participation!

Greetings from Nuremberg

Christian / NETWAYS GmbH

ONE User Group – Hungary

Following an honoring invitation received in the spring of 2013, we are pleased to announce the launch of the website of the Hungarian OpenNebula User Community. The Community was created for exactly what its name suggests: to bring Hungarian OpenNebula users together. With the Hungarian website, launched alongside the first wave of national ONE communities, our goal is to make the software first, and cloud technology second, widely known and understood among users in Hungary, and to build a lasting, reliable professional community that users and developers of the software, whether beginner, advanced or professional, can call their own.
We would like us, the Hungarian users of the software, to form a real community: a forum that all of us can turn to openly and with confidence, with our opinions, questions and experiences. If you are a Hungarian-speaking OpenNebula user, don’t hesitate, contact us at any of the addresses below!

Let’s build the Hungarian OpenNebula User Community together!


OPTIMIS Toolkit Now Available in the OpenNebula Ecosystem

OPTIMIS (Optimized Infrastructure Services) is an open source cloud computing research project that has recently published the final release of its toolkit (v3.0). The OPTIMIS toolkit is now available in the OpenNebula Ecosystem, and the code is also available on GitHub and in a publicly available SVN repository. The OPTIMIS project kicked off in 2010, co-financed by the EU’s FP7 framework program. The project is led by IT services company Atos and includes Umea Universitet, 451 Research, Universität Stuttgart (HLRS), ICCS, Barcelona Supercomputing Center (BSC), SAP, Fraunhofer-Gesellschaft, University of Leeds, Leibniz Universität Hannover, Flexiant, BT Group, City University London and Arsys.

The primary goal of the OPTIMIS project was to optimize cloud infrastructure services by producing an architectural framework and a development toolkit covering the full cloud service lifecycle (construction, deployment and operation). With its newly developed programming model, integrated development environment and deployment tools, OPTIMIS gives service providers the capability to easily orchestrate cloud services from scratch, run legacy apps on the cloud and make intelligent deployment decisions based on their – or their customers’ – preferences regarding trust, risk, eco-efficiency and cost (TREC). It supports end-to-end security and compliance with data protection and green legislation. The toolkit also allows for developing once and deploying services across different types of cloud environments – private, hybrid, federated or multi-clouds. The OPTIMIS operation tools are intended to simplify and automate the management of infrastructure, and aim to improve resource utilization efficiency.

OPTIMIS supports ‘best execution venue’ strategies. It is fundamentally a cloud-enabling technology that ultimately allows users to schedule and automate the delivery of workloads to the most suitable venues (internal or external) based on policies such as TREC. The OPTIMIS software tools are deployed in the datacenter, and are a complement to cloud management and orchestration platforms.

Most of the OPTIMIS components, though not all of them, are open source, primarily under the Apache license. Joining the OpenNebula Ecosystem makes perfect sense for OPTIMIS.

Try out the toolkit and let the OPTIMIS team know what you think!

OpenNebula Cloud API: Amazon, OGF OCCI, OpenStack, Google Cloud, DMTF CIMI or vCloud?

Last week we launched a survey to collect feedback from our community about their preferred interface for cloud consumers and how we should invest our resources in cloud API enhancement and development. The survey was open for two days, receiving feedback from almost 200 OpenNebula clouds.

Targeted at OpenNebula cloud administrators, our first aim was to gather information about the level of use of the two cloud APIs currently offered by OpenNebula, namely AWS and OGF OCCI. The results show that:

  • 38% do not expose cloud APIs; their users interface only through the Sunstone GUI
  • 36% mostly use the AWS API
  • 26% mostly use OpenNebula’s OCCI API or the OCCI API offered by rOCCI

Then we asked how they would like us to invest our resources to enhance the Cloud APIs. The results show that:

  • 47% to enhance the existing AWS API implementation
  • 21% to enhance the existing OGF OCCI implementation
  • 10% to implement the OpenStack API
  • 10% to implement the vCloud API
  • 6% to implement the Google Cloud API
  • 6% to implement the DMTF CIMI API

We also received many valuable additional comments, mostly stating that “AWS is the de-facto standard and an executive-friendly selling point”, “OGF’s OCCI is the only independent community-driven standard”, “OpenStack API is still not stable and fully documented”, and “Google Cloud API represents a big opportunity in the medium term”.

Guided by these results, our plans for the near future are to enhance and extend our implementation of the AWS API and to offer OCCI compatibility through the rOCCI component. Of course, if your organization is interested in implementing one of the less-demanded Cloud APIs, we can provide you with the needed support.

Thanks a lot for your feedback!

OpenNebula User Group France

The OpenNebula Project recently announced a plan to support and promote the creation of local OpenNebula user groups in different countries. The goal of each community will be to organize regular meetings and workshops where OpenNebula users (developers, administrators, researchers and IT managers) can come together to share their experiences (technical aspects, best practices, use cases) and grow their network.

Not wanting to miss this call, which is a unique opportunity to foster exchange between OpenNebula users in France, I have taken the initiative of working to set up this community. To bring users together around this project, a Google discussion group and a wiki page have been created to make it easier to share information.

Access to the group is open to everyone. Once we have reached a reasonable number of members, we will organize the first meeting, which will mark the effective start of the group. In the longer term, we would like to organize bi-monthly events across different cities in France. In collaboration with SysFera, we are offering to make premises available for events organized in Lyon. If you are interested in acting as a relay in your city, please join the discussion group to let us know.

See you soon!

OpenNebula User Group Italia

The OpenNebula project has announced its intention to support and promote the creation of local user groups, with the aim of organizing meetings to share experiences and ideas, discuss the various technical aspects, and actively promote the use of OpenNebula in our region.

Not wanting to miss this call, we are trying to bring together people interested in taking part in the discussions on the dedicated Google Group, with the goal of organizing bi-monthly pizza meetups and talks given by those who want to share their first-hand experience with OpenNebula.

To date, physical premises for meeting up and organizing events have been made available in Pisa and Milan (by Liberologico and Webfacilities respectively), and we are also open to other possible locations and sponsorships for events in other parts of Italy.

If you too would like to contribute, subscribe to the group and visit the dedicated web page, which contains the contact details for getting in touch with the other Italian OpenNebula users.

See you soon!

Community Extension Enables Infiniband in KVM Virtual Machines

The OpenNebula Project is pleased to announce the availability of a community-developed VM MAD which enables Infiniband interfaces in KVM virtual machines. The driver is compatible with OpenNebula 3.x and 4.0. An SR-IOV enabled environment is a prerequisite for the driver.

For more information, or to download the KVM-SRIOV VMM, visit the driver’s webpage.

Demonstration video:

Start Your New OpenNebula User Group!

The OpenNebula Project is happy to announce the support for the creation and operation of OpenNebula User Groups. An OpenNebula User Group is a gathering of our users in a local area to share best practices, discuss technical questions, network, and learn from each other.

If you are a passionate OpenNebula user and are interested in starting your own OpenNebula User Group, join our Community Discuss mailing list and let us know about your plans.

There is more information in the new User Groups section of our site.

We look forward to your User Group proposal!

Ceph Support in OpenNebula 4.0

Written in conjunction with scuttlemonkey (Patrick McGarry), of the Ceph Project, and cross-posted on the Ceph blog.

scuttlemonkey: “The Ceph team has been extremely blessed with the number of new people who choose to become involved with our community in some way. Even more exciting are the sheer numbers of people committing code and integration work, and the folks from OpenNebula are a great example of this in action.

At the end of February, one of the OpenNebula developers reached out to let us know that their integration work with Ceph was nearly complete. Below you can find a brief overview of how Ceph behaves in an OpenNebula environment, as well as a link to how to get it set up. Read on for details!”

It’s worth noting that the 4.0 release is still in beta and there was a bug discovered in the Ceph driver. Thankfully this has a workaround noted in the doc as well as a fix already committed for the release. If you have any issues feel free to stop by the #opennebula channel and let them know.

OpenNebula continues with its growing support of new storage technologies. OpenNebula 4.0 comes with a fresh integration with Ceph, an impressive distributed object store and file system.

OpenNebula provides an interface for Ceph RBDs (RADOS Block Device), which allows registering images directly in a Ceph pool, and running VMs using that backend.

There is an extensive Ceph for OpenNebula guide, but it can be summarized as follows:

  • OpenNebula worker nodes should be part of a working Ceph cluster.
  • The ”one” Ceph pool should be available (the name is configurable).
  • Use Libvirt/KVM as the hypervisor. Xen is not yet supported.

Once we have that up and running, using it is extremely simple!

  1. Make sure we have the ”one” Ceph pool
    $ ceph osd lspools
    0 data,1 metadata,2 rbd,3 one,
  2. Create a Ceph datastore from a template file (the name ”ceph.ds” is illustrative)
    $ cat ceph.ds
    NAME      = ceph
    DS_MAD    = ceph
    TM_MAD    = ceph
    HOST      = ceph0
    POOL_NAME = one
    $ onedatastore create ceph.ds
    ID: 101
  3. Register an image
    $ oneimage create --name centos-6.4 --path /tmp/centos-6.4.img -d ceph
    ID: 4
  4. Run your VM
    $ onevm create --name centos-6.4 --cpu 1 --memory 768 --nic 1 --disk centos-6.4 --nic net_172
    ID: 10

What happens behind the scenes is that OpenNebula interacts with the Ceph cluster and clones the base image. The Libvirt/KVM deployment file uses that clone image as the OS:

    <disk type='network' device='disk'>
        <source protocol='rbd' name='one/one-4-10-0'/>
        <target dev='hdb' bus='ide'/>
    </disk>

All the image handling and manipulation (cloning, renaming, removing, etc.) is performed on a specific server defined in the datastore template – in our case ”HOST = ceph0” – using the ”rbd” capabilities of ”qemu-img” for the registration of new images.

scuttlemonkey: “Thanks to the OpenNebula team for hammering out this integration, we love to see new use cases for Ceph!

If you have your own use case story, we would love to hear about it (and share it with the world). Feel free to drop us your thoughts in the form of a suggested blog title, raw data that you wish us to write some prose about, or a fully formed blog post that we can push out to the community. Thanks, and happy Ceph-ing.”