OpenNebula Dashboards with Graphite and Grafana

We at TeleData operate several OpenNebula instances in our datacenters and offer public cloud services such as IaaS, PaaS and scalable hosting and ISP services for our business customers in Germany.

Professional 24/7 monitoring of our whole infrastructure is an essential element for our customers. Such technical solutions have to offer the ability to drill down the hierarchy to identify bottlenecks and to support crucial capacity management. Every OpenNebula user has to know his workload and plan his resources. At TeleData we use an N+1 redundancy to cover host failures and similar events.

With this strategy you cannot allocate 100% of your available resources, so you have to monitor your current allocations. The ability to identify your most loaded VMM hosts or your most utilized vDCs is also valuable information for doing system and resource management the right way.

In this short guide we will extract monitoring data in the form of Graphite metrics for further, deeper and, last but not least, long-term monitoring.

Pulling the information out of OpenNebula turned out to be quite easy. We developed a small script which collects the desired metrics and sends them to Graphite.

First a small diagram for a quick overview:

Diagram OpenNebula - Graphite - Grafana

 

Let's start.
(This guide assumes that you are already familiar with Graphite and Grafana)

 

  1. Gathering the wanted Information
    As advised in the OpenNebula Forum [1], we used the XML output of the common tools "onehost" and "onegroup" on the OpenNebula controller/daemon, executed by a cron job every 5 minutes. The script is Ruby-based and uses common gems like "nokogiri" for XML operations.
  2.  Send out metrics to Graphite
    The script generates Graphite metrics from the desired information and pushes them to your Graphite server. Graphite stores the metrics according to your storage schema (the syntax is frequency:retention).
  3.  Display / Graph your metrics
    With your time-series data in Graphite you can use shiny tools like "Grafana" [2] or "Dashing" [3] to create informative and quite impressive dashboards for your OpenNebula OPS team. With some templating (included in our JSON exports) you can unleash the power of Grafana.
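To make step 2 concrete, here is a minimal sketch of Graphite's plaintext protocol: a metric is a single line of "<path> <value> <unix-timestamp>" sent to carbon's plaintext port (2003 by default). The helper function and the metric naming scheme below are illustrative assumptions, not the actual script:

```shell
# Build a Graphite plaintext metric line for an OpenNebula host.
# The "one.hosts.<id>.<metric>" naming scheme is an example, not the
# one used by the published script.
graphite_line() {
  printf 'one.hosts.%s.%s %s %s\n' "$1" "$2" "$3" "$4"
}

TS=1456790400                         # normally: TS=$(date +%s)
graphite_line 3 cpu_usage 400 "$TS"
graphite_line 3 mem_usage 8388608 "$TS"
# Pipe the lines into your Graphite server, e.g.:
#   graphite_line 3 cpu_usage 400 "$TS" | nc graphite.example.com 2003
```

On the Graphite side, a matching storage schema (frequency:retention) for metrics collected every 5 minutes and kept for a year could be "5m:1y".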

See some examples:

 

All work is published on GitHub: one-graphite
If there are any questions or issues feel free to add your comment at the OpenNebula Forum [1].

In our opinion OpenNebula works like a charm. It is open and flexible. OpenNebula is one of the most mature, comprehensive and valuable cloud stacks available on the market. We have mentioned this before: http://opennebula.org/opennebula-at-teledata/

Thanks to its low complexity, it is simple to enhance and to add features.
Of course our Graphite cluster is also powered by OpenNebula, like many of our other services.

Have fun!

 

Links:

[1] https://forum.opennebula.org/t/long-term-statistics-and-capacity-management-for-opennebula-clouds/1886/3
[2] http://grafana.org/
[3] http://shopify.github.io/dashing/

LXC Containers for OpenNebula

Operating-system-level virtualization, a technology that has recently emerged and is being accepted into cloud infrastructures, has the advantage of providing better performance and scalability than other virtualization technologies such as Hardware-Assisted Virtual Machines (HVM).

Linux Containers (LXC) make use of this technology by creating containers that resemble complete, isolated Linux virtual machines on the physical Linux machine, all while sharing the kernel with the virtualized portion of the system. A container is a virtual environment with its own process and network space. LXC uses Linux kernel control groups and namespaces to provide the isolation. Containers have their own view of the OS, the process ID space, the file system structure, and the network interfaces. Since they use kernel features and there is no emulation of hardware at all, the impact on performance is minimal. Starting up and shutting down containers, as well as creating or destroying them, are fairly quick operations. There have been comparative studies, such as LXD vs KVM by Canonical, which show advantages of LXD systems over KVM. LXD is built on top of LXC and uses the same kernel features, so performance should be the same.
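To give a feel for how this isolation is expressed in practice, here is a minimal LXC container configuration (LXC 1.x syntax; the container name, paths and limits are illustrative): the network namespace is a veth pair bridged to the host, and resource limits are plain cgroup settings enforced by the kernel, with no hardware emulation involved.

```
# /var/lib/lxc/web01/config -- minimal illustrative example
lxc.utsname = web01
lxc.rootfs  = /var/lib/lxc/web01/rootfs

# Network namespace: veth pair attached to the host bridge br0
lxc.network.type  = veth
lxc.network.link  = br0
lxc.network.flags = up

# cgroup limits enforced by the kernel
lxc.cgroup.memory.limit_in_bytes = 1G
lxc.cgroup.cpuset.cpus           = 0,1
```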

Nowadays, public cloud Infrastructure as a Service (IaaS) providers like Amazon only offer applications based on Docker containers deployed on virtual machines. Docker is designed to package a single application with all of its dependencies into a standardized unit for software development, not to create a virtual machine. Only a few systems offer IaaS on bare-metal container infrastructures. Joyent, for instance, is able to put to use all of the advantages that OS-level virtualization provides.

However, in private cloud scenarios OS-level virtualization hasn't had quite the acceptance it should. Private cloud managers like OpenStack and Eucalyptus don't offer the support needed for this type of technology. OpenNebula is a flexible cloud manager which has gained a very good reputation over the last few years. Therefore, strengthening OS virtualization support in this cloud manager could be a key strategic decision.

This is why LXCoNe was created. It is a virtualization and monitoring driver for OpenNebula, delivered as an add-on, that provides OpenNebula with the ability to deploy LXC containers. It contributes to better interoperability, performance and scalability in OpenNebula clouds. Right now the driver is stable and ready for release. It is currently being used in the data center of Instituto Superior Politécnico José Antonio Echeverría in Cuba, with great results. The team is still working on adding some more features, shown below, to improve the driver.

Features and Limitations

The driver has several features:

  • Deployment of containers on file systems, Logical Volume Manager (LVM) and Ceph.
  • Attachment and detachment of network interface cards and disks, both before creating the container and while it is running.
  • Monitoring containers and node’s resources usage.
  • Powering off, suspending, stopping, un-deploying and rebooting running containers.
  • VNC support.
  • Snapshot support when using file systems.
  • Limiting a container's RAM usage.

It lacks the following features, on which we are currently working:

  • Container’s CPU usage limitation.
  • Containers live migration.

Virtualization Solutions

OpenNebula was designed to be completely independent of the underlying technologies. When the project started, the only supported hypervisors were Xen and KVM; it was not designed with OS-level virtualization in mind. This probably influenced the way OpenNebula manages physical and virtual resources. Because of this, and due to the differences between the two types of virtualization technologies, there are a few things to keep in mind when using the driver. These are:

Disks

When you successfully attach a hard drive, it will appear inside the container under /media/<Disk_ID>. To detach the hard drive, it must still be located at that same path inside the container. It cannot be in use; otherwise the operation will deliberately fail.

Network interfaces (NIC)

If you hot-attach a NIC, it will appear inside the container, but without any configuration, contrary to what happens when you specify NICs in the template and then create the virtual machine. In that case, the NIC will appear set up and ready to be used, unless you specifically want it to appear otherwise.
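Since a hot-attached NIC comes up unconfigured, a small helper along these lines can print the commands to configure it; you could pipe its output into lxc-attach or run the commands inside the container. The interface name, addresses and container name are placeholders:

```shell
# Print the commands that bring up a hot-attached NIC inside a container.
# Device, address and gateway are illustrative placeholders.
nic_setup() {
  dev=$1; addr=$2; gw=$3
  printf 'ip link set %s up\n' "$dev"
  printf 'ip addr add %s dev %s\n' "$addr" "$dev"
  printf 'ip route add default via %s dev %s\n' "$gw" "$dev"
}

nic_setup eth1 192.168.0.10/24 192.168.0.1
# To apply inside a container named one-42 (hypothetical name):
#   nic_setup eth1 192.168.0.10/24 192.168.0.1 | lxc-attach -n one-42 -- sh
```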

Installation

Want to try it? The drivers are part of the OpenNebula Add-on Catalog. The installation process is fully explained in this guide.

Contributions, feedback and issues are very welcome; interact with us in the GitHub repository or write an email to:

José Manuel de la Fé Herrero: jmdelafe92@gmail.com
Sergio Vega Gutiérrez: sergiojvg92@gmail.com

 

New Support Services Coverage by OpenNebula Systems

OpenNebula Systems has announced an extension of its support services aimed at those OpenNebula users in production environments based on KVM that need support for the whole stack. Hence, support can be extended to include the operating system of the virtualization nodes and the controller (CentOS and/or Ubuntu), the hypervisor (libvirt and KVM), networking technologies such as VXLAN/VLAN 802.1Q, and Ceph as the storage backend.

The supported components are part of the Open Cloud Reference Architecture, created by OpenNebula Systems from the collective information and experiences of hundreds of users and cloud client engagements. This reference architecture documents the software products, configurations, and requirements of infrastructure platforms recommended for a smooth OpenNebula installation.

This new support coverage is offered as an add-on to the traditional OpenNebula support.

 

OpenNebulaConf 2016: Call for Speakers Open


 

As you may already know, this year's OpenNebulaConf is taking place in Barcelona again, on October 25-27. Everybody who wants to join as a speaker can now propose a talk via the online form at:

We are looking for content that is appropriate for people who are brand new to OpenNebula, experts, and everyone in between. First time submitting? Don't feel intimidated. We know that the cloud and open-source community can be very intimidating for anybody who is interested in participating. We strongly encourage first-time speakers to submit talks.

If you are an OpenNebula practitioner, user, architect, devop, admin or developer and have something to share, we welcome your submission. Suggested topics include:

– Latest developments in OpenNebula
– Research using OpenNebula
– User experiences and case studies using OpenNebula
– Best practices and tools
– Integration with other cloud, virtualization and data center components
– Any other topics that you feel are relevant to developers, users, researchers and other members of the community

Need inspiration?

Need help with or suggestions for your presentation? We’ve got lots of ideas and are happy to discuss your talk ideas before you submit them. Just reach out to us!

Schedule

The agenda will include a one-day pre-conference (October 25) with tutorials and hacking session, and a two-day conference (October 26 and 27) with keynotes, regular sessions, lightning talks and open sessions.

Benefits

Speakers will receive free admission, which includes:

– Attendance at all conference presentations
– Attendance at pre-conference tutorials and hacking sessions
– Coffee during the morning and afternoon breaks
– Lunch on both conference days
– Dinner event on the first conference day
– Tapas dinner on the pre-conference day
– WiFi access

… and the opportunity to address a large audience of talented and influential cloud and open-source experts!

Guidelines

The deadline is April 29, 2016. Speaker selection notifications will go out no later than May 6, 2016 at 11:59 PM CET.

Each presentation slot will be approximately 20 minutes long, followed by a question-and-answer session with the audience. The agenda will also include lightning talks and open sessions. You can also submit panels, laboratories or multi-presenter proposals.

If you want to get an idea of past OpenNebulaConf sessions, including talks from companies such as CentOS, Runtastic, Puppet Labs, Cloudweavers, RedHat, Produban, Unity and Deutsche Post, please check our YouTube channel or download the presentations from our SlideShare account.

We are looking forward to your talks!

Docker Swarm with OpenNebula

Following our series of posts about OpenNebula and Docker, we would like to showcase the use of Docker Swarm with OpenNebula.

Docker Swarm is native clustering for Docker. With Docker Swarm you can aggregate a group of Docker Engines (running as Virtual Machines in OpenNebula in our case) into a single Virtual Docker Host. Docker Swarm delivers many advantages, like scheduling, high availability, etc.

We continue with our approach of using Docker Machine with the OpenNebula Driver plugin, in order to deploy Docker Engines easily and seamlessly. Please make sure to follow the previous post, in order to have a fully functional Docker Machine working with OpenNebula.

As displayed in the following image, Docker Swarm will make a cluster out of a collection of Docker Engine VMs deployed in OpenNebula with Docker Machine:

docker-swarm-opennebula

Docker Swarm makes use of a discovery service in order to implement cluster communication and discovery. The Docker project provides a hosted discovery service, which is appropriate for testing use cases; in our case, however, we will use Docker Machine to deploy an instance of Consul, in particular this Docker container for Consul.

NOTE: This guide is specific to KVM. If you would like to try this plugin out with the vCenter hypervisor, or with vOneCloud, there are a few small differences, so make sure you read this: Docker Machine OpenNebula plugin with vCenter. In particular, you will need to use the --opennebula-template-* options instead of --opennebula-image-*.

The first step is to deploy it using Docker Machine:


$ docker-machine create -d opennebula --opennebula-network-name private --opennebula-image-name boot2docker --opennebula-b2d-size 10240 consul
$ docker $(docker-machine config consul) run -d -p "8500:8500" -h "consul" progrium/consul -server -bootstrap
$ CONSUL_IP=$(docker-machine ip consul)

Once it’s deployed, we can deploy the Swarm master:

$ docker-machine create -d opennebula --opennebula-network-name private --opennebula-image-name boot2docker --opennebula-b2d-size 10240 --swarm --swarm-master --swarm-discovery="consul://$CONSUL_IP:8500" --engine-opt cluster-store=consul://$CONSUL_IP:8500 --engine-opt cluster-advertise="eth0:2376" swarm-master

And now deploy swarm nodes:

$ docker-machine create -d opennebula --opennebula-network-name private --opennebula-image-name boot2docker --opennebula-b2d-size 10240 --swarm --swarm-discovery="consul://$CONSUL_IP:8500" --engine-opt cluster-store=consul://$CONSUL_IP:8500 --engine-opt cluster-advertise="eth0:2376" swarm-node-01

You can repeat this for as many nodes as you want.
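To avoid typing that long command for every node, a small loop can generate one docker-machine create invocation per node. This sketch only prints the commands (pipe a line into sh to execute it); the node names and the Consul IP are placeholders:

```shell
# Generate one "docker-machine create" command per swarm node.
# CONSUL_IP and the node names are placeholders for your deployment.
CONSUL_IP=${CONSUL_IP:-10.3.4.28}   # e.g. CONSUL_IP=$(docker-machine ip consul)

swarm_node_cmd() {
  printf 'docker-machine create -d opennebula --opennebula-network-name private --opennebula-image-name boot2docker --opennebula-b2d-size 10240 --swarm --swarm-discovery=consul://%s:8500 --engine-opt cluster-store=consul://%s:8500 --engine-opt cluster-advertise=eth0:2376 swarm-node-%s\n' "$CONSUL_IP" "$CONSUL_IP" "$1"
}

for i in 01 02 03; do
  swarm_node_cmd "$i"     # execute with: swarm_node_cmd "$i" | sh
done
```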

Finally, we can connect to the swarm like this:

$ eval $(docker-machine env --swarm swarm-master)
$ docker info
Containers: 3
Running: 3
Paused: 0
Stopped: 0
Images: 2
Server Version: swarm/1.1.2
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 2
swarm-master: 10.3.4.29:2376
└ Status: Healthy
└ Containers: 2
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.021 GiB
└ Labels: executiondriver=native-0.2, kernelversion=4.1.18-boot2docker, operatingsystem=Boot2Docker 1.10.2 (TCL 6.4.1); master : 611be10 - Tue Feb 23 00:06:40 UTC 2016, provider=opennebula, storagedriver=aufs
└ Error: (none)
└ UpdatedAt: 2016-02-29T16:08:41Z
swarm-node-01: 10.3.4.30:2376
└ Status: Healthy
└ Containers: 1
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.021 GiB
└ Labels: executiondriver=native-0.2, kernelversion=4.1.18-boot2docker, operatingsystem=Boot2Docker 1.10.2 (TCL 6.4.1); master : 611be10 - Tue Feb 23 00:06:40 UTC 2016, provider=opennebula, storagedriver=aufs
└ Error: (none)
└ UpdatedAt: 2016-02-29T16:08:12Z
Plugins:
Volume:
Network:
Kernel Version: 4.1.18-boot2docker
Operating System: linux
Architecture: amd64
CPUs: 2
Total Memory: 2.043 GiB
Name: swarm-master

The cluster-store and cluster-advertise options are necessary to create multi-host networks with the overlay driver within a swarm cluster.

Once the swarm cluster is running, we can create a network with the overlay driver:

$ docker network create --driver overlay --subnet=10.0.1.0/24 overlay_net

and then check that the network is running:

$ docker network ls

In order to test the network, we can run an nginx server on the swarm-master:

$ docker run -itd --name=web --net=overlay_net --env="constraint:node==swarm-master" nginx

and get the contents of the nginx server's home page from a container deployed on another cluster node:

$ docker run -it --rm --net=overlay_net --env="constraint:node==swarm-node-01" busybox wget -O- http://web

As you can see, thanks to the Docker Machine OpenNebula Driver plugin, you can deploy a real, production ready swarm in a matter of minutes.

In the next post, we will show you how to use OneFlow to provide your Docker Swarm with automatic elasticity. Stay tuned!

OpenNebula Newsletter – February 2016

Latest developments, events and future plans for the upcoming months from the OpenNebula project. Read this newsletter to keep up to date with your favorite Cloud Management Platform.

You might be interested in taking a look at the sponsorship opportunities for the next OpenNebulaConf 2016, due in October in Barcelona. The OpenNebula TechDays planned for this year may also be of interest: upcoming TechDays in March and April will be held in Madrid, Dallas and Toronto.

Technology

The OpenNebula team keeps working on OpenNebula 5.0, the next major upgrade, which will incorporate a wealth of new features while keeping a smooth upgrade path. And no, this is no easy task, folks. But we will make it happen.

OpenNebula 5.0 is a revolution around the corner. In a few weeks, a beta version will come out featuring the following highlights:

  • revamped Marketplace, now a first class citizen in OpenNebula, to import/export images to and from any datastore (including the new vCenter datastores)
  • vCenter storage management, including datastore selection and disk hotplug, a full integration with the new Marketplace functionality. Share VMDKs among OpenNebula instances!
  • fully integrated virtual router management (including default and transparent HA for routers). Link virtual networks together using a robust HA router, that will come out of the box in OpenNebula

5.0 will also feature a myriad of interface changes, including an extension of the Cloud View available operations, to iron out the wrinkles in the user experience and a long list of other features.

As you may have noticed, interest in Docker is surging. We see OpenNebula as the perfect IaaS support for Docker, and as such our strategy for integrating the Docker ecosystem within OpenNebula follows a "container as a service" approach: containers within VMs, rather than replacing KVM/ESX with containers, to ensure we maintain strong multi-tenancy. First we want to integrate OpenNebula with Docker Machine to be able to build a container environment within your cloud. This feature is very advanced, as you can see in this blog post. The second step will be to integrate tightly with Docker Swarm to be able to create Swarms using the OneFlow component.


Community

Lots of exciting things are happening in the OpenNebula community. Collaborations, contributions, and yes! even praise. Let’s review the highlights.

LINBIT has developed a new set of drivers to give OpenNebula support for the excellent DRBD functionality. Check an informative demo in this video.

A new release of the Perl binding for the OpenNebula Cloud API (OCA) is also worth noting if Perl is your sysadmin scripting language of choice. No excuse not to automate all the chores in your cloud infrastructure now!

A freshly brewed integration of OpenNebula with a professional VDI platform has been announced by UDS Enterprise, to be available in the upcoming 2.0 release of their software.

Our friends at NodeWeavers never stop. Take a look at their new family addition. Cute is an understatement and, since it is based on OpenNebula, so is powerful.

Worldwide adoption of OpenNebula is a fact. If more proof were needed, this is an excellent example: OpenNebula has been translated into Persian! Check it out.

Overall, we are very proud of the OpenNebula community. OpenNebula wouldn't be half as good without you guys, keep it up!

Outreach

The next OpenNebula Conference in Barcelona in October 2016 has already been announced. If you plan to attend and save the date now, you can take advantage of a 40% discount on your Conf tickets. More information is available from the event site. The Conf already has its first sponsor: StorPool. Welcome! Learn about the different sponsorship opportunities on the Conference web page. If you want to understand what all the OpenNebula Conference fuss is about, check the last Conference material (talks, slides, pictures).

Two of this year's scheduled OpenNebula TechDays have already been hosted. The first one was the TechDay in Kuala Lumpur, at MIMOS, where we learned about the wide OpenNebula ecosystem that is flourishing in Malaysia. We want to thank all the people at MIMOS who hosted us for making this possible. The second TechDay happened in Sofia, Bulgaria, hosted by StorPool. We also want to thank StorPool for the magnificent hosting and the excellent local promotion, which yielded an excellent turnout.


If you are interested in participating in (or hosting) any of these TechDays, let us know. The next TechDay will happen this month, hosted by Rentalia, and will be held in Madrid, Spain.

The Swiss Open Systems User Group will be gathering on June 16th to discuss cloud computing. If you are around and want to learn more about clouds and OpenNebula, drop by.

You may be interested in OpenNebula Systems training plans for 2016 for Europe and the US. These courses are designed to train cloud administrators to properly operate an OpenNebula infrastructure. Please contact us if you would like to request training near you.

Members of the OpenNebula team will be present in the following events in upcoming months:

  • VMworld 2016 US, August 28 – September 1, Las Vegas (Mandalay Bay Hotel & Convention Center), Nevada, US.
  • VMworld 2016 Europe, October 17 – 20, Barcelona (Fira Barcelona Gran Via), Spain.

Remember that you can see slides and resources from past events in our Events page. We have also created a Slideshare account where you can see the slides from some of our recent presentations.

Docker-Machine OpenNebula Plugin

Ever since the last post about Docker Machine things have evolved quite a bit, and we would like to keep you up to date!

Docker-Machine has changed its plugin architecture, and we have since then adapted to this new architecture and registered the OpenNebula plugin.

It’s very easy to install OpenNebula support for Docker Machine. You will need a fully working OpenNebula cloud, and it must be accessible from your client machine; you can use ONE_AUTH and ONE_XMLRPC to connect to OpenNebula. Read more here: OpenNebula Shell Environment.
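For instance, a client environment could look like this (the hostname and path are placeholders for your own installation):

```shell
# Point the OpenNebula client tools (and the Docker Machine plugin)
# at your frontend. The values are illustrative placeholders.
export ONE_AUTH="$HOME/.one/one_auth"                      # file containing "username:password"
export ONE_XMLRPC="http://frontend.example.com:2633/RPC2"  # XML-RPC endpoint of the frontend
```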

NOTE: This guide is specific to KVM. If you would like to try this plugin out with the vCenter hypervisor, or with vOneCloud, there are a few small differences, so make sure you read this: Docker Machine OpenNebula plugin with vCenter. In particular, you will need to use the --opennebula-template-* options instead of --opennebula-image-*.

You will also need to download a Boot2Docker image; don’t worry, we’ve got that covered! We have uploaded two images to the official OpenNebula Marketplace, prepared to be used as Docker Engine machines:

You can download either of those two images into your OpenNebula instance and use them for Docker Machine. Or, you can prepare your own image, using your favourite distribution, as long as it’s supported by Docker Machine and it has the latest OpenNebula Contextualization packages.

The following diagram visualizes how we are going to connect from a local computer (the Docker client) to a Docker Engine deployed in a provider, OpenNebula in our case.

docker_arch

Once the requirements have been fulfilled, let’s go ahead with this simple installation:

  • Step 1: Install Docker Machine in your client machine.
  • Step 2: Follow these instructions to build the driver on the client machine. In a nutshell, you will need to do the following:

    $ go get github.com/OpenNebula/docker-machine-opennebula
    $ cd $GOPATH/src/github.com/OpenNebula/docker-machine-opennebula
    $ make build
    $ make install

    However, for any clarifications, please make sure to read the Docker Machine OpenNebula Driver plugin guide.

Once you have installed it, you will be able to use Docker-Machine with OpenNebula as your backend:


$ docker-machine create --driver opennebula --opennebula-network-name private --opennebula-image-name boot2docker --opennebula-data-size 10240 mydockerengine
$ eval $(docker-machine env mydockerengine)

As simple as that! This will start a new VM in OpenNebula, and it will be your new Docker Engine!

I would like to personally thank Marco Mancini for doing the new plugin integration and for helping devise and design it!

This is not the last you’ll hear from us. We will soon publish a guide on how to use Docker Machine to deploy Docker Swarm using OpenNebula as the backend. We also have some more surprises involving OneFlow. Stay tuned!

And now, go try it out and have fun!

Upcoming Cloud TechDays Madrid and Dublin

Today we are announcing that we have just opened the call for speakers and registration to the following Cloud TechDays:

Send us an email at events@opennebula.org if you are interested in speaking at one of the TechDays, and register as soon as possible if you are interested in participating; seats are limited!


If you missed our last posts, these are the OpenNebula Cloud Tech Days that we have already announced for 2016:

For more information on past events, please visit the Cloud Technology Days page.


Please send us an email at events@opennebula.org if you are interested in hosting a TechDays event.

We look forward to your answers!

StorPool to Sponsor OpenNebulaConf 2016

OpenNebulaConf 2016 is getting closer and we would like to share with you the first Platinum Sponsor: StorPool Storage. You can meet them in the booth area during the coffee and lunch breaks. If your company/project is interested in sponsoring OpenNebulaConf 2016, there are still slots available.

StorPool is intelligent storage software that runs on standard servers and builds scalable, high-performance storage systems out of those servers. It focuses on block-level storage and excels at it. OpenNebula is the preferred cloud management system for StorPool: it is simple yet powerful and works very well. StorPool is already integrated with OpenNebula; the integration is implemented as a new datastore driver in OpenNebula. With StorPool, OpenNebula clouds get exceptional storage bandwidth, IOPS and low latency. This allows provisioning more VMs per host, which increases utilization and ROI. Combining both products also allows for seamless scalability in capacity and performance as well as increased reliability.

At the event StorPool will demonstrate the joint solution, answer any questions and help customers improve the design of their clouds. If you want to participate in OpenNebulaConf and meet StorPool and other OpenNebula users, remember that you are still in time to get a good deal on tickets.

About StorPool Storage

StorPool is block-level storage software. It has an advanced, fully distributed architecture and is arguably the fastest and most efficient block-storage software on the market today. StorPool is incredibly flexible and can be deployed either in converged setups (on compute nodes, alongside VMs and applications) or on separate storage nodes. More about StorPool at www.storpool.com or info@storpool.com.


StorPool Storage Hosting the First OpenNebula TechDay in Bulgaria on February 25th

StorPool Storage is going to host the first OpenNebula event ever organized in Bulgaria. On February 25th, OpenNebula and StorPool are joining forces for a special one-day community event full of useful workshops and presentations, all focused on how to get the most out of your private cloud. Jaime Melis, co-founder of OpenNebula, is coming to Sofia for a hands-on tutorial on how to create and manage your own OpenNebula cloud. The event is targeted at system admins, cloud architects, system integrators, devops architects, solutions architects, data center admins, and community members and non-members from Bulgaria and the region.

When: 25 February 2016
Time: 10 am – 6 pm EET
Venue: Vivacom Art Hall
Address: 4 Gurko Street, Sofia 1000

Free Registration here. Seats are limited.

Call for speakers: Email events@opennebula.org to become one of them. You are invited to share cloud use cases and deployment experiences, introduce new integrations and ecosystem developments, or describe other related cloud open-source projects and tools.

StorPool Storage is looking forward to welcoming anyone interested in building their own OpenNebula cloud in Sofia! See you all soon!

About StorPool

StorPool integrated with OpenNebula in March 2015. By deploying OpenNebula for cloud management and StorPool for block storage, any company can now enjoy an efficient IT infrastructure; more details here. Hosting providers like METANET are already successfully using the combined solution. StorPool assisted METANET with designing and bringing up many details of the service, such as backup, and with improving the behaviour of the cloud management system.