oneInsight: A 2D-Load Visualization Addon for OpenNebula-Managed Hosts

I’m pleased to announce oneInsight, a visualization addon for OpenNebula that gives users an at-a-glance view of the load of managed hosts. It provides several kinds of load mappings, currently covering the following metrics:

  • CPU used by OpenNebula-managed virtual machines;
  • Memory used by managed virtual machines;
  • Effective CPU used by all system processes, including processes outside of managed virtual machines;
  • Effective memory used by all system processes.

Here is a screenshot showing an overview of CPU usage.

Screenshot of oneInsight


oneInsight offers many benefits, for cloud operators as well as for business managers:

  • Provides simple, comprehensible load charts that give you an accurate at-a-glance view of how loaded your servers are, so you can plan migrations and capacity upgrades when necessary;
  • Provides details about each server in zero or one click, via tooltips and popups;
  • High-quality visualization that saves you from parsing command-line output;
  • A lightweight HTML/JavaScript stack that can be deployed on any server in your IT infrastructure; all you need is a valid OpenNebula user account and network access to the OpenNebula server.

How oneInsight Works

oneInsight works out of the box on the vast majority of Linux operating systems, provided the following tools are installed:

  • curl command line interface
  • The Bash interpreter
  • The cron time-based job scheduler
  • A Web server such as Apache or nginx; even the Python SimpleHTTPServer module works fine
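As a quick sanity check before deploying, a short shell snippet (an illustrative helper, not part of oneInsight itself) can verify that the required tools are available:

```shell
#!/bin/sh
# Check that the tools oneInsight depends on are available.
# Illustrative helper, not part of oneInsight itself.
for tool in curl bash crontab; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: MISSING"
    fi
done
```

For a quick test you can then serve the oneInsight directory with Python’s built-in web server, e.g. `python -m SimpleHTTPServer 8080` (or `python3 -m http.server 8080` on Python 3).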

Read the documentation to get started.

What’s Next & Contributions

oneInsight is a young project, and there is a lot to do around data visualization in OpenNebula. Contributors are welcome; we follow the GitHub pull-request model for contributions to code and documentation. Stay tuned.

OpenNebula User Group France

The OpenNebula project recently announced a plan to support and promote the creation of local OpenNebula user groups in different countries. The goal of each community is to regularly organize meetings and workshops where OpenNebula users (developers, administrators, researchers, and IT managers) can get together to share their experiences (technical aspects, best practices, use cases) and grow their networks.

So as not to miss this call, which is a unique opportunity to develop exchanges among OpenNebula users in France, I have taken the initiative of working to set up this community. To bring users together around this project, a Google discussion group and a wiki page have been created to make it easier to share information.

Access to the group is open to everyone. Once we reach a reasonable number of members, we will organize the first meeting, which will mark the effective start of the group. In the long run we would like to organize bimonthly events across different cities in France. In collaboration with SysFera, we propose to make premises available for events organized in Lyon. If you are interested in acting as a relay in your city, please join the discussion group and let us know.

See you soon!

SVMSched: a tool to enable On-demand SaaS and PaaS on top of OpenNebula

SVMSched [1] [2] is a tool designed to enable on-demand SaaS clouds on virtualized infrastructures, and it can also easily be set up to support PaaS clouds. SVMSched can be used to build cloud platforms where a service is deployed to process a user-given dataset with a predefined application, based on given hardware requirements (CPU, memory). In such a context, SVMSched seamlessly and automatically creates a custom virtual computing environment to run the service on the fly. This virtual computing environment is built to start executing the service at boot time, and is automatically destroyed after the execution, freeing the allocated resources.

Benefits of SVMSched

  • Configuration-based On-demand Cloud Services: An SVMSched cloud is based on a single configuration file in which you define the set of software services you wish to provide from your virtualized infrastructure. This configuration file also holds the parameters for connecting to the OpenNebula server, the scripts and data needed to automatically build the virtual environments that run the services, and so on.
  • Automatic provisioning and high-level abstraction of virtual machines: After deploying SVMSched in your cloud infrastructure, you no longer need to manipulate virtual machine templates. To run a service, you only need to make a simple request of the form “I want a virtual machine with 4 CPUs and 512 MB of memory to process a given set of data with a specific application”. SVMSched then does the rest for you: it prepares the virtual machine’s image, instantiates the virtual machine, deploys and starts it on a node it selects seamlessly, starts the execution of the service within the virtual machine, and shuts the virtual machine down when the execution is complete.
  • Scheduling: SVMSched enables advanced scheduling policies such as task prioritization, best-effort scheduling with automatic preemption and resumption (plus migration where required), resource sharing, etc.
  • Remote Data Repository: SVMSched allows you to define shared data repositories on the network that can be mounted automatically into the file system of virtual machines at startup, before the service starts executing. Such a repository can be useful for storing binaries and any other data required by the compute tasks, and thus provides a mechanism that eases the handling of input and output data. Hence, you can avoid handling large virtual machine images (which take a long time to set up), while minimizing the risk of losing data and completed computation if a virtual machine fails unexpectedly.

Integration Architecture

The figure below shows the architecture for integrating SVMSched with OpenNebula. In brief, SVMSched:

  • Works as a drop-in replacement for OpenNebula’s default scheduler (mm_sched).
  • Exposes a socket interface managed by a listening daemon. The socket works over IP, thereby allowing remote clients.
  • Provides a built-in UNIX-like command-line client, which can run on a different server than the SVMSched daemon.
  • Communicates with OpenNebula through the XML-RPC interface; SVMSched and OpenNebula can be hosted on different servers.
  • Relies on a single XML configuration file; there is no need to manipulate virtual machine templates.
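To give a concrete idea of the XML-RPC point, the sketch below shows what such a request to OpenNebula looks like at the wire level, reproduced with curl. This is a dry run under stated assumptions: the endpoint (http://localhost:2633/RPC2), the `oneadmin:onepass` session string, and the exact parameter list of `one.vmpool.info` are illustrative and may vary across OpenNebula versions; check the XML-RPC API documentation for your release.

```shell
#!/bin/sh
# Sketch of the kind of XML-RPC request a scheduler like SVMSched
# sends to OpenNebula. Endpoint, credentials, and the parameter list
# are assumptions; consult the OpenNebula XML-RPC API docs.
cat > /tmp/one-vmpool-info.xml <<'EOF'
<?xml version="1.0"?>
<methodCall>
  <methodName>one.vmpool.info</methodName>
  <params>
    <param><value><string>oneadmin:onepass</string></value></param>
    <param><value><i4>-2</i4></value></param> <!-- filter: all VMs -->
    <param><value><i4>-1</i4></value></param> <!-- range start -->
    <param><value><i4>-1</i4></value></param> <!-- range end -->
  </params>
</methodCall>
EOF
# Dry run: print the curl command instead of executing it.
echo curl -s -X POST -H 'Content-Type: text/xml' \
    --data @/tmp/one-vmpool-info.xml http://localhost:2633/RPC2
```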

SVMSched Integration Architecture

Use cases

Without being exhaustive, here are some situations where SVMSched can bring you significant added value.

Automatic deployments for on-demand PaaS/SaaS services

Typical contexts include executing services based on computational applications (data/input => processing => results/output), resource/platform leasing, etc. Software testing (validation testing, non-regression testing, etc.) is a typical example. In such a context, the infrastructure behaves as a dynamic virtual cluster, in which virtual machines are created and deployed on the fly for specific, limited lifetimes, after which they disappear. Each virtual machine has a specific/custom configuration (software stack, amount of CPU, memory size). After its lifetime, determined by the time required to run the service, the virtual machine is automatically destroyed to free the allocated resources. The following points explain the few things you need to do to set up such a cloud:

  1. Define one or more services in SVMSched’s configuration file according to your needs. E.g. a service can consist of running a specific unit-test script.
  2. If necessary, set up a data repository (a shared network file system) in which the binaries and data required to run the services will be located. Recall that SVMSched can mount this repository automatically into the file system of the virtual machines.
  3. Finally, running a service is straightforward. For example, the following command runs an instance of the service named “example-service1” using a virtual machine with 2 CPUs and 1024 MB of memory. In the example, we assume the input data is located in /data/repository/file.dat, specified with the -a option.

$ svmschedclient [-H svmsched@server=localhost] --vcpu=2 --memory=1024 \
          -r <example-service1> -a /data/repository/file.dat

On-demand Infrastructure for Training

See here for an example. In such a situation, SVMSched can be especially useful to avoid manually setting up multiple virtual machine templates while still being able to create virtual machines with various hardware and software configurations, which can be time-consuming. For example, assume you have to deal with several training courses, each requiring a practical session (e.g. parallel programming, web application deployment). The software and hardware requirements of the virtual machines needed for the different practical sessions can vary considerably, and may require setting up a lot of virtual machine templates. You may also need the virtual machines to be destroyed automatically at the end of each practical session (given by a duration). Using SVMSched, only four straightforward steps are needed to set up such an infrastructure:

  1. Define each practical session as a service in SVMSched’s configuration file.
  2. For each service, set up a data repository in which the software binaries, libraries, and data required for that practical session will be located. Recall that SVMSched can mount this repository automatically into the file system of the virtual machines.
  3. For the main program (executable), use a simple script that sleeps for a given duration.
  4. Finally, for each student attending a given session, you only need to request a virtual machine with specific hardware requirements (memory and CPU) for a given duration. The example below shows how to create a virtual machine with 2 CPUs, 1024 MB of memory, and a lifetime of 3 hours. HINT: if all virtual machines need the same requirements, you can use a loop over the number of attendees.

$ svmschedclient [-H svmsched@server=localhost] --vcpu=2 --memory=1024 \
          -r <training-service-id> -a 7200
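As a sketch of the hint above, the loop below would request one virtual machine per attendee. It is a dry run (it echoes the commands instead of executing them), and training-service-id and the attendee count are placeholders:

```shell
#!/bin/sh
# Dry run: print one svmschedclient invocation per attendee.
# Remove the leading "echo" to actually submit the requests.
ATTENDEES=3
i=1
while [ "$i" -le "$ATTENDEES" ]; do
    echo svmschedclient --vcpu=2 --memory=1024 \
        -r training-service-id -a 7200
    i=$((i + 1))
done
```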

Co-hosting of production and development services

A typical case is when you want to use the idle resources of a production infrastructure to carry out development tasks such as software testing (init tests, non-regression testing or NRT, etc.). SVMSched allows you to distinguish production tasks (prioritized and non-preemptable) from best-effort tasks (non-prioritized and preemptable). When operating, SVMSched can automatically preempt best-effort jobs when no resources are available to run queued production tasks. Preempted jobs are automatically resumed as soon as resources become idle; the decisions to preempt and resume are taken autonomously. Assuming you have already set up an SVMSched cloud, the following commands show how to run two jobs in production and best-effort modes, respectively.

$ svmschedclient [-H svmsched@server=localhost] --vcpu=2 --memory=1024 \
          -r <prod-service-id> -a /data/repository/file1.dat [-t prod]
$ svmschedclient [-H svmsched@server=localhost] --vcpu=2 --memory=1024 \
          -r <nrt-service-id> -a /data/repository/file2.dat -t beff


SVMSched (Smart Virtual Machine Scheduler) is a tool designed to enable and ease the setup of on-demand SaaS and PaaS services on top of OpenNebula. SVMSched is open source and freely available for download [1]. However, it is still at the development stage and not yet production-ready. As an ongoing project, feedback and collaboration are appreciated, so don’t hesitate to contact the authors if you have questions, suggestions, or comments.


[1] SVMSched Home.

[2] Rodrigue Chakode, Blaise-Omer Yenke, Jean-Francois Mehaut. Resource Management of Virtual Infrastructure for On-demand SaaS Services. In CLOSER 2011: Proceedings of the 1st International Conference on Cloud Computing and Services Science, pages 352–361, Noordwijkerhout, The Netherlands, May 2011.