OpenNebula Conf 2016: Agenda Available


The OpenNebula Project is proud to announce the final agenda and line-up of keynote speakers for the fourth OpenNebula Conference, to be held in Barcelona from October 24 to 26. Guided by your feedback from previous editions, we have changed the traditional format of our conference this year and included more community sessions for learning and networking.

Keynotes

The agenda includes four keynote speakers.

Community Sessions 

We had a big response to the call for presentations. Thanks for submitting a talk proposal! Although all submissions were of very high quality and merit, only a small number of abstracts will be presented. Unlike previous editions, we will have a single track with 10-minute talks, to keep the whole audience focused and interested. We have done our very best to strike the perfect balance of topics. We will have talks by Unity Technologies, StorPool, LINBIT and NetWays.

Hands-on Workshops 

We will have four 90-minute hands-on workshops, where some of the key contributors to OpenNebula will walk attendees through the configuration and integration aspects of the main subsystems in OpenNebula:

  • Networking, NFVs and SDNs
  • Storage
  • Hypervisors and Containers
  • Security, Federation & Hybrid Hands-on Workshop

These hands-on sessions will also include 5-minute lightning talks, each focusing on one key point. If you would like to speak in these sessions, please contact us!

Pre-conference Tutorials 

This year we will have two pre-conference tutorials:

  • Introductory Hands-on Tutorial
  • Advanced Hands-on Tutorial

Besides the amazing talks, there are multiple goodies packed into the OpenNebulaConf registration. There is still time to get a good deal on tickets: a 40% discount applies until July 15th.

Also, your company may be interested in the sponsorship opportunities for OpenNebulaConf 2016; some sponsorship slots are still available. Current sponsors of OpenNebulaConf 2016 include StorPool and LINBIT as Platinum Sponsors, NodeWeaver as Gold Sponsor, and Terradue and Todo En Cloud as Silver Sponsors.

We are looking forward to welcoming you personally in Barcelona!

 

OpenNebula 5.0 Call for Translations

Dear community,

As OpenNebula 5.0 draws closer, we would like to launch a call for translations of our web-based user interface, Sunstone.

It is very easy: existing translations can be updated, and new translations submitted, through our project site at Transifex:

https://www.transifex.com/opennebula

Translations reaching a good level of completion will be included in the official final release of OpenNebula. The deadline for translations is Friday, June 10th.

Thanks for your collaboration!

OpenNebula in OpenExpo 2016, Madrid

This coming June 2nd, OpenNebula will proudly be a gold sponsor of the third edition of OpenExpo 2016, to be held in Madrid, Spain. The objective of this event is to promote and evaluate solutions and trends in the FLOSS industry.

OpenExpo Web Page

OpenNebula will have a booth where members of the project will present the new features available in the upcoming 5.0 "Wizard" release, and will also give two talks about the OpenNebula technology.

If you are around Madrid next week and want to learn and discuss OpenNebula, come and join us in OpenExpo!

 

Agenda for the Upcoming Cloud TechDay in Ede, NL

Next week, on May 13th 2016, BIT.nl will host a new edition of the OpenNebula Cloud TechDays.

This TechDay will feature a 4-hour hands-on tutorial in which you will learn how to install and configure an OpenNebula cloud from scratch. The presentations in the afternoon will be focused on Ceph. We want you to learn as much as possible about Ceph best practices and how to use Ceph with your OpenNebula cloud.

The agenda for the afternoon is:

  • Object scale-out with the Eternus CD10000 by Walter Graf and Frits de Kok from Fujitsu.
    An introduction to object storage, Ceph concepts and internals, and how Fujitsu managed to overcome the administrative challenges involved in running a Ceph cluster.
    Fujitsu and BIT will be giving a demo of the Fujitsu CD10000 and OpenNebula.
  • Building the Dutch National Archive with Ceph by Wido den Hollander, member of the Ceph Board.
    The Dutch National Archive has chosen Ceph to store its data in Groningen, The Netherlands. Together with ODC Noord, I’ve built the 8PB Ceph cluster running in an IPv6-only network. This talk will go in depth into the design decisions made when building this cluster.
  • The OpenNebula Ceph Drivers by Jaime Melis from OpenNebula Systems.
    An overview of the Ceph drivers: configuration attributes and peculiarities. Everything you should know before deploying your OpenNebula + Ceph cloud.
  • BIT’s experiences playing with Ceph and OpenNebula by Stefan Kooman from BIT.nl.
    BIT has been running a Ceph test cluster for some time and will talk about their experiences so far. A live demo is planned in which we will test Ceph’s ability to recover from failure.

Join this TechDay to learn about OpenNebula, the Cloud, Ceph and benefit from the expertise of the speakers!


More information
Register for this Event

Providing Enterprise-Grade Infrastructure In The Cloud Age


The “Cloud” Need Not Be A Commodity

CipherSpace has a heritage of providing managed ICT infrastructure and services to clients who need customized, enterprise-quality solutions at small to medium business scale. Historically this demanded versatility, so that we could provide each client with the right solutions as well as the expertise and judgment to guide them in choosing those solutions. As a consequence, we have used many approaches to virtualization over the years: FreeBSD Jails, Parallels, VMware ESX/ESXi, Citrix XenServer, and Linux KVM, as well as working with others in non-production testing and evaluation. Because we have diverse clients with services in technically close proximity to each other, security has always been a critical focus of our work.

We also have a strong preference for open source software, based not only on the obvious cost advantage but also on its adaptability and more transparent (and historically better) security. We’re not OSS purists in any sense, but openness is an important feature of many solutions we offer, as is the availability of commercial support for critical instances of critical tools, both for ourselves and our customers.

It became clear to us in 2011 that “Cloud” architectures were maturing in a way which made it imperative for us to create our own suite of solutions that would serve our clients differently and better than what they could get from the commodity sector. Read more

NodeWeaver to Sponsor OpenNebulaConf 2016

OpenNebula Conf 2016 is getting closer, and we would like to keep sharing with you the companies and projects that are sponsoring this year’s conference. This time it is the turn of NodeWeaver, one of our Gold Sponsors.

If you want to participate in OpenNebula Conf and meet NodeWeaver and other OpenNebula users, remember that there is still time to get a good deal on tickets. Also, if your company is interested in sponsoring OpenNebulaConf 2016, there are still slots available.

About NodeWeaver

NodeWeaver is a new kind of hyperconverged appliance, bringing together storage, networking and virtualization; it is completely self-managing and designed to be simple to operate and exceptionally reliable. We chose from the start to build on the OpenNebula orchestrator as our main component, a choice driven by its simplicity, reliability, and open design and development process. That choice has proven, over and over, to be an excellent one: OpenNebula’s simplicity has been greatly appreciated by our customers, and the modularity of its architecture substantially reduced the complexity of adapting and extending it. In three years of experimentation and several mission-critical deployments, OpenNebula has exceeded all our expectations.

Agenda for Upcoming Cloud TechDays in USA & Canada

We are pleased to announce the agenda for the upcoming Cloud Technology Days that will be held in April in the USA and Canada.

They feature a 4-hour hands-on tutorial in which you will learn how to install and configure an OpenNebula cloud from scratch, as well as presentations from community members and users.

The event in Dallas, TX is hosted by Improving, a complete IT services firm offering training, consulting, recruiting, and project services. The event is also sponsored by DigitalOcean, a cloud infrastructure provider focused on simplifying web infrastructure for software developers. In addition to the hands-on tutorial that will be held in the morning, we will have the following talks during the afternoon:

  • Javier Fontán from OpenNebula Systems will talk about Customizing Virtual Machine Images and Docker Machine and Swarm on OpenNebula

 

The event in Toronto, Canada is hosted by Canada151 Data Centers, which provides carrier-neutral, highly redundant colocation and disaster recovery services from their West Toronto facility. The event is also sponsored by Solgenia, a cloud technology provider specializing in SaaS, PaaS and IaaS solutions. In addition to the hands-on tutorial that will be held in the morning, we will have the following talks during the afternoon:

  • Khoder Shamy from Fuze (formerly ThinkingPhones) will share how they leverage the OpenNebula open-source project in Fuze’s rapidly growing global private infrastructure
  • Javier Fontán from OpenNebula Systems will talk about Customizing Virtual Machine Images and Docker Machine and Swarm on OpenNebula
  • Presentation about Canada151 Data Centers
  • Varadarajan Narayanan from Wayz Infratek will talk about Hyperconvergence and OpenNebula
  • Presentation about Solgenia Private Cloud and Data Centers

Throughout the day, Canada151 will be conducting tours of the data center for interested parties.

 

The event in Cambridge, MA is hosted by Harvard Research Computing, which was established in 2007 as part of the Faculty of Arts & Sciences (FAS) Division of Science at Harvard University, with the founding principle of facilitating the advancement of complex research by providing leading-edge computing services. In addition to the hands-on tutorial that will be held in the morning, we will have the following talks during the afternoon:

  • John Noss from Research Computing at Harvard will talk about their experience with OpenNebula in Research Computing
  • Dan Kelleher from Research Computing at Harvard will talk about the Corona Project
  • Roy Keene from Knight Point will talk about OpenNebula: The Integrator’s Story
  • Jaime Melis from OpenNebula Systems will talk about Docker Machine and Swarm on OpenNebula
  • Javier Fontán from OpenNebula Systems will talk about Customizing Virtual Machine Images

 

Many thanks to Improving, DigitalOcean, Canada151 Data Centers, Solgenia and Research Computing at Harvard University for hosting and sponsoring these events.

We are looking forward to seeing you there; these are three community events you cannot miss! Seats are limited, so register ASAP!

OpenNebula Dashboards with Graphite and Grafana

We at TeleData operate several OpenNebula instances in our datacenters and offer public cloud services such as IaaS, PaaS, and scalable hosting and ISP services for our business customers in Germany.

Professional 24/7 monitoring of our whole infrastructure is an essential element for our customers. Technical solutions therefore have to offer the ability to drill down through the hierarchy to identify bottlenecks and to do crucial capacity management. Every OpenNebula user has to know their workload and plan their resources. At TeleData we use N+1 redundancy to cover host failures and similar events.

With this strategy you cannot allocate 100% of your available resources, so you have to monitor your current allocations. The ability to identify your most loaded VMM hosts or your most utilized vDCs is also valuable information for doing system and resource management the right way.

In this little guide we will show how to extract monitoring data in the form of Graphite metrics for further, deeper and, last but not least, long-term monitoring.

Pulling this information out of OpenNebula turned out to be quite easy: we developed a little script which collects the desired metrics and sends them to Graphite.

First a small diagram for a quick overview:

Diagram OpenNebula - Graphite - Grafana

 

Let’s start.
(This guide assumes that you are already familiar with Graphite and Grafana.)

 

  1. Gather the desired information
    As advised in the OpenNebula Forum [1], we use the XML output of the standard tools “onehost” and “onegroup” on the OpenNebula controller/daemon, executed by a cron job every 5 minutes. The script is Ruby-based and uses common gems like “nokogiri” for the XML operations.
  2. Send the metrics to Graphite
    The script generates Graphite metrics from this information and pushes them to your Graphite server (see the sketch after this list). Graphite stores the metrics according to your storage schema (the syntax is frequency:retention).
  3. Display / graph your metrics
    With your time series data in Graphite, you can use shiny tools like “Grafana” [2] or “Dashing” [3] to create informative and quite impressive dashboards for your OpenNebula ops team. With some templating (included in our JSON exports) you can unleash the full power of Grafana.


All the work is published on GitHub: one-graphite
If there are any questions or issues, feel free to add your comment in the OpenNebula Forum [1].

In our opinion, OpenNebula works like a charm. It is open and flexible, and it is one of the most mature, comprehensive and valuable cloud stacks available on the market. We have mentioned that before: http://opennebula.org/opennebula-at-teledata/

Because of its low complexity, it is simple to enhance and to add features.
Of course our Graphite cluster is also powered by OpenNebula, like many other services.

Have fun!

 

Links:

[1] https://forum.opennebula.org/t/long-term-statistics-and-capacity-management-for-opennebula-clouds/1886/3
[2] http://grafana.org/
[3] http://shopify.github.io/dashing/

LXC Containers for OpenNebula

Operating-system-level virtualization, a technology that has recently emerged and is being adopted in cloud infrastructures, has the advantage of providing better performance and scalability than other virtualization technologies such as hardware-assisted virtual machines (HVM).

Linux Containers (LXC) make use of this technology by creating containers that resemble complete, isolated Linux virtual machines on the physical Linux machine, all while sharing the kernel with the virtual portion of the system. A container is a virtual environment with its own process and network space. LXC uses Linux kernel control groups (cgroups) and namespaces to provide the isolation. Containers have their own view of the OS, the process ID space, the file system structure, and the network interfaces. Since they use kernel features and there is no hardware emulation at all, the impact on performance is minimal. Starting up and shutting down containers, as well as creating or destroying them, are fairly quick operations. There have been comparative studies, such as LXD vs KVM by Canonical, which show advantages of LXD systems over KVM. LXD is built on top of LXC and uses the same kernel features, so performance should be the same.
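As a rough illustration of how lightweight these operations are, this is what creating, starting and entering a container looks like with the stock LXC command-line tools (the container name "web01" and the chosen image are arbitrary examples):

$ lxc-create -n web01 -t download -- -d ubuntu -r trusty -a amd64
$ lxc-start -n web01 -d              # boots in seconds, no hardware emulation involved
$ lxc-attach -n web01 -- ps aux      # shows only the container's own processes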

Nowadays, public cloud Infrastructure as a Service (IaaS) providers, like Amazon, only offer applications based on Docker containers deployed on virtual machines. Docker is designed to package a single application with all of its dependencies into a standardized unit for software development, not to create a virtual machine. Only a few systems offer IaaS on bare-metal container infrastructures. Joyent, for instance, is able to put to use all of the advantages that OS-level virtualization provides.

However, in private cloud scenarios, OS-level virtualization hasn’t had quite the acceptance it deserves. Private cloud managers, like OpenStack and Eucalyptus, don’t offer the support needed for this type of technology. OpenNebula is a flexible cloud manager which has gained a very good reputation over the last few years. Therefore, strengthening the OS-level virtualization support in this cloud manager could be a key strategic decision.

This is why LXCoNe was created. It is a virtualization and monitoring driver for OpenNebula, delivered as an add-on, that provides OpenNebula with the ability to deploy LXC containers. It contributes to achieving better interoperability, performance and scalability in OpenNebula clouds. Right now, the driver is stable and ready for release. It is currently being used in the data center of the Instituto Superior Politécnico José Antonio Echeverría in Cuba, with great results. The team is still working on adding more features, listed below, to improve the driver.

Features and Limitations

The driver has several features, such as:

  • Deployment of containers on file systems, Logical Volume Manager (LVM) and Ceph.
  • Attachment and detachment of network interface cards and disks, either before creating the container or while it is running.
  • Monitoring of container and node resource usage.
  • Powering off, suspending, stopping, un-deploying and rebooting running containers.
  • VNC support.
  • Snapshot support when using file systems.
  • Limiting a container’s RAM usage.

It lacks the following features, on which we are currently working:

  • Limiting a container’s CPU usage.
  • Live migration of containers.

Virtualization Solutions

OpenNebula was designed to be completely independent from the underlying technologies. When the project started, the only supported hypervisors were Xen and KVM; it was not designed with OS-level virtualization in mind. This probably influenced the way OpenNebula manages physical and virtual resources. Because of this, and due to the differences between the two types of virtualization technologies, there are a few things to keep in mind when using the driver:

Disks

When you successfully attach a hard drive, it will appear inside the container under /media/<DISK_ID>. To detach the hard drive, it must still be located at that same path inside the container, and it cannot be in use; otherwise the operation will deliberately fail.
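For instance, a hot-attach/detach cycle through the standard OpenNebula CLI might look like this (the VM ID 42, the image name "datablock01" and the disk ID 1 are placeholders):

$ onevm disk-attach 42 --image datablock01
$ # inside the container, the new disk now shows up under /media/<DISK_ID>
$ onevm disk-detach 42 1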

Network interfaces (NIC)

If you hot-attach a NIC, it will appear inside the container, but without any configuration: the interface will be up and usable, but it is not set up automatically. This is contrary to what happens when you specify NICs in the template and then create the virtual machine; in that case, the NIC will appear fully set up, unless you specifically request otherwise.
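A hot-attach from the CLI, followed by manual configuration inside the container, might look like this (the VM ID, network name, address and interface name are placeholders):

$ onevm nic-attach 42 --network private
$ # inside the container: the new interface is up but carries no configuration yet
$ ip addr add 192.168.0.10/24 dev eth1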

Installation

Want to try it out? The drivers are part of the OpenNebula Add-on Catalog, and the installation process is fully and simply explained in this guide.

Contributions, feedback and issues are very welcome; you can interact with us in the GitHub repository or write an email to:

José Manuel de la Fé Herrero: jmdelafe92@gmail.com
Sergio Vega Gutiérrez: sergiojvg92@gmail.com

 

Docker Swarm with OpenNebula

Following our series of posts about OpenNebula and Docker, we would like to showcase the use of Docker Swarm with OpenNebula.

Docker Swarm is native clustering for Docker. With Docker Swarm you can aggregate a group of Docker Engines (in our case, running as Virtual Machines in OpenNebula) into a single virtual Docker host. Docker Swarm delivers many advantages, such as scheduling and high availability.

We continue with our approach of using Docker Machine with the OpenNebula driver plugin in order to deploy Docker Engines easily and seamlessly. Please make sure to follow the previous post first, so that you have a fully functional Docker Machine working with OpenNebula.

As displayed in the following image, Docker Swarm will make a cluster out of a collection of Docker Engine VMs deployed in OpenNebula with Docker Machine:

docker-swarm-opennebula

Docker Swarm makes use of a discovery service to implement cluster communication and discovery. The Docker project provides a hosted discovery service, which is appropriate for testing use cases; in our case, however, we will use Docker Machine to deploy an instance of Consul, in particular this Docker container for Consul.

NOTE: This guide is specific to KVM. If you would like to try this plugin with the vCenter hypervisor or with vOneCloud, there are a few small differences, so make sure you read this: Docker Machine OpenNebula plugin with vCenter. In particular, you will need to use the --opennebula-template-* options instead of --opennebula-image-*.

The first step is to deploy the Consul instance using Docker Machine:


$ docker-machine create -d opennebula --opennebula-network-name private --opennebula-image-name boot2docker --opennebula-b2d-size 10240 consul
$ docker $(docker-machine config consul) run -d -p "8500:8500" -h "consul" progrium/consul -server -bootstrap
$ CONSUL_IP=$(docker-machine ip consul)

Once it’s deployed, we can deploy the Swarm master:

$ docker-machine create -d opennebula --opennebula-network-name private --opennebula-image-name boot2docker --opennebula-b2d-size 10240 --swarm --swarm-master --swarm-discovery="consul://$CONSUL_IP:8500" --engine-opt cluster-store=consul://$CONSUL_IP:8500 --engine-opt cluster-advertise="eth0:2376" swarm-master

And now deploy swarm nodes:

$ docker-machine create -d opennebula --opennebula-network-name private --opennebula-image-name boot2docker --opennebula-b2d-size 10240 --swarm --swarm-discovery="consul://$CONSUL_IP:8500" --engine-opt cluster-store=consul://$CONSUL_IP:8500 --engine-opt cluster-advertise="eth0:2376" swarm-node-01

You can repeat this for as many nodes as you want; the loop sketched below does the trick.
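For instance, two more nodes with identical settings can be added with a simple shell loop (the node names are placeholders):

$ for NODE in swarm-node-02 swarm-node-03; do \
    docker-machine create -d opennebula --opennebula-network-name private \
      --opennebula-image-name boot2docker --opennebula-b2d-size 10240 \
      --swarm --swarm-discovery="consul://$CONSUL_IP:8500" \
      --engine-opt cluster-store=consul://$CONSUL_IP:8500 \
      --engine-opt cluster-advertise="eth0:2376" $NODE; \
  done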

Finally, we can connect to the swarm like this:

$ eval $(docker-machine env --swarm swarm-master)
$ docker info
Containers: 3
Running: 3
Paused: 0
Stopped: 0
Images: 2
Server Version: swarm/1.1.2
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 2
swarm-master: 10.3.4.29:2376
└ Status: Healthy
└ Containers: 2
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.021 GiB
└ Labels: executiondriver=native-0.2, kernelversion=4.1.18-boot2docker, operatingsystem=Boot2Docker 1.10.2 (TCL 6.4.1); master : 611be10 - Tue Feb 23 00:06:40 UTC 2016, provider=opennebula, storagedriver=aufs
└ Error: (none)
└ UpdatedAt: 2016-02-29T16:08:41Z
swarm-node-01: 10.3.4.30:2376
└ Status: Healthy
└ Containers: 1
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.021 GiB
└ Labels: executiondriver=native-0.2, kernelversion=4.1.18-boot2docker, operatingsystem=Boot2Docker 1.10.2 (TCL 6.4.1); master : 611be10 - Tue Feb 23 00:06:40 UTC 2016, provider=opennebula, storagedriver=aufs
└ Error: (none)
└ UpdatedAt: 2016-02-29T16:08:12Z
Plugins:
Volume:
Network:
Kernel Version: 4.1.18-boot2docker
Operating System: linux
Architecture: amd64
CPUs: 2
Total Memory: 2.043 GiB
Name: swarm-master

The cluster-store and cluster-advertise options are necessary to create multi-host networks with the overlay driver within a swarm cluster.

Once the swarm cluster is running, we can create a network with the overlay driver:

$ docker network create --driver overlay --subnet=10.0.1.0/24 overlay_net

and then check that the network is running:

$ docker network ls

In order to test the network, we can run an nginx server on the swarm-master:

$ docker run -itd --name=web --net=overlay_net --env="constraint:node==swarm-master" nginx

and get the contents of the nginx server’s home page from a container deployed on another cluster node:

$ docker run -it --rm --net=overlay_net --env="constraint:node==swarm-node-01" busybox wget -O- http://web

As you can see, thanks to the Docker Machine OpenNebula driver plugin, you can deploy a real, production-ready swarm in a matter of minutes.

In the next post, we will show you how to use OneFlow to provide your Docker Swarm with automatic elasticity. Stay tuned!