A new appliance in the Marketplace: Kubernetes (K8s)

We are happy to announce a new addition to the steadily growing OpenNebula Marketplace. This time we are bringing you the most popular container orchestration platform – Kubernetes. As with the previously introduced appliances (you can read more about them in our previous blog post), our Kubernetes appliance gives you a “press-of-a-button” way to create and deploy a functional service.

In the past, Kubernetes was notoriously hard to set up, which is why projects like Rancher sprang up. (Do you want Rancher as a future appliance? Let us know!) We have also tried to make the creation of K8s clusters much simpler for you. The appliance supports multiple contextualization parameters so it can bend to your needs and your required configuration, in very much the same spirit as all the other ONE service appliances.

On top of this, we extended the simplicity and versatility of this appliance with OneFlow service support, which makes perfect sense for Kubernetes clusters. Now you can deploy a whole K8s multi-node cluster with just one click. More info can be found in our Service Kubernetes documentation.
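If you prefer the CLI to Sunstone, the same one-click deployment can be sketched with the OneFlow command line tools (the template ID below is a placeholder; list your service templates first):

oneflow-template list
oneflow-template instantiate <template_id>
oneflow list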

Kubernetes, Docker, microservices, containers, and all the other trendy cloud technologies and terminology can become confusing, and not everyone is fully versed in these new topics. (Have you heard about DevOps and CI/CD?) So let us clarify exactly what our Kubernetes appliance does for you and what it doesn’t.

This service appliance provides you with a K8s cluster (one master node and an arbitrary number of worker nodes – including zero). Every node is just a regular VM, with which you are already familiar. OpenNebula does NOT manage containers or pods inside a created K8s cluster. When you deploy this service appliance, you get a K8s cluster which exposes the Kubernetes API (on a designated IP address of the master node). You can access it via kubectl or the UI dashboard (pictured below) to create pods, deployments, services, etc. You can also add more nodes to the cluster at any time later using contextualization. But other than that, you are in charge, and it is up to you to keep the cluster up and running.
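As an illustration only (the IP address is a placeholder and the kubeconfig path assumes a kubeadm-style layout, so check the appliance documentation for the actual access details), talking to the exposed API with kubectl could look like this:

scp root@<master-node-ip>:/etc/kubernetes/admin.conf ./kubeconfig

kubectl --kubeconfig ./kubeconfig get nodes
kubectl --kubeconfig ./kubeconfig create deployment nginx --image=nginx
kubectl --kubeconfig ./kubeconfig expose deployment nginx --port=80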

Have a look in the OpenNebula Marketplace.

Check out the video screencast on how to get started with the K8s appliance.

 

OpenNebula Edge – Maintenance release v.5.8.1 is now available!

There’s plenty to be excited about with 5.8 Edge – and now we have released maintenance release v.5.8.1, with bug fixes and a set of minor new features, including:

  • Added a time picker for relative scheduled actions
  • Check vCenter cluster health during monitoring
  • Implemented nested AND and OR filters when filtering from the CLI
  • Added input for a command to be executed in LXD containers through the VNC terminal
  • Updated Ceph requirements for LXD setups
  • Extended logs of LXD actions with the native container log
  • New API call: one.vmpool.infoextended
  • Added official support for Sunstone banners

Check the release notes for the complete set of new features and bug fixes.


OpenNebula Systems has just announced the availability of vOneCloud version 3.4.

vOneCloud 3.4 is powered by OpenNebula 5.8 “Edge”, and, as such, includes functionalities present in Edge relevant to vOneCloud:

  • Change the boot order of VM devices by updating the VM Template. More info here.
  • VM migration between clusters and datastores is now supported. Check here.
  • Migrate images from KVM to vCenter, or vice versa. More info here.
  • New configuration file: the default behaviour of the image importation process can be changed. More info here.
  • Scheduled VM actions can now be specified relative to the VM start time, for example: terminate this VM one month after it was created.
  • Automatic selection of Virtual Networks for VM NICs, to balance network usage at deployment time or reduce clutter in your VM Template list. More info here.
  • New self-provisioning model for networks, Virtual Network Templates. Users can now instantiate their own virtual networks from predefined templates with their own addressing.
  • Support for NIC aliases: VMs can have more than one IP associated with the same network interface (see the sketch below). More info here.
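As a minimal sketch, these are the kind of attributes a NIC alias adds to a VM template (the network name is a placeholder; by default the parent NIC is addressed by its automatic name NIC0):

NIC       = [ NETWORK = "public" ]
NIC_ALIAS = [ NETWORK = "public", PARENT = "NIC0" ]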

Multiple bugfixes and documentation improvements have been included in this version. The complete list of changes can be checked on the development portal.

vOneCloud 3.4 has been certified with support for vSphere 6.0, 6.5 and 6.7.

If you are looking for additional details about OpenNebula integration with VMware, check out the recently published VMware Solution Brief, as well as this OpenNebula-VMware blog post.


 

Our newsletter contains the highlights of the OpenNebula project and its Community throughout the month.

Technology

In the month of March we maintained a sharp focus on activities surrounding last month’s release of v.5.8 Edge. We have been working on bug fixes and have a maintenance release, v.5.8.1, scheduled for early April. At the same time, internal discussions around “what’s to come” in the upcoming v.5.10 have already begun.

We did take the time to carry out a thorough Scalability Test and Tuning exercise to break down how well OpenNebula 5.8 scales, and what some of the recommendations are for achieving optimal performance and scalability. For detailed info on Scalability Testing and Tuning, check out our reference documentation.

We also reviewed and posted how to take advantage of OpenNebula 5.8 Edge with LXD support, and how to get going with a quick installation using miniONE. This is a perfect way to test out v.5.8, starting with AWS and setting it up from start to finish in a few minutes. You can check out the video screencast with the step-by-step instructions.

Community

OpenNebula’s partnership with VMware, and more so its native integration, was highlighted in fine form this month. An OpenNebula-VMware Solution Brief was published on the VMware Solutions Exchange website. Additionally, we published a solution overview on the VMware Ecosystem Partners site.

We also released the 2018 OpenNebula Survey results, which provide a comprehensive look into the development and progression of the OpenNebula project, the growth and evolution of its usage, and a glimpse into what the Community is looking for from OpenNebula in the future.

Lastly, we communicated our shift to using a Developer Certificate of Origin (DCO) to manage the code contribution process. This allows us to move away from the Contributor License Agreement (CLA) mechanism we have been using until now and to adopt a more universally appealing approach with the DCO.

Outreach

As warmer weather approaches (at least here in the Northern Hemisphere), so does the OpenNebula TechDay season. Our first two TechDays of 2019 are approaching at the beginning of May:

  • May 8, 2019 – Barcelona, Spain – hosted by CSUC
  • May 16, 2019 – Sofia, Bulgaria – hosted by StorPool

Agendas are being pulled together and will soon be published for your review! Remember, these are FREE one-day events, laden with technical insight from users, tutorials provided by OpenNebula Systems, and a great opportunity to network with the Community.

And don’t forget to plan ahead for OpenNebulaConf 2019 in Barcelona, Spain on October 21-22.

Stay connected!

One of OpenNebula’s main features is its low resource footprint, which allows OpenNebula clouds to grow to a massive scale without a big impact on the hardware required. There is a continuous effort from the team behind OpenNebula’s development on efficiency and performance, and several improvements in this area have been included in the latest release, OpenNebula 5.8 “Edge”. The objective of this blog post is to describe the scalability testing performed to define the scale limits of a single OpenNebula instance (single zone). This testing, along with some recommendations to tune your deployment, is described in the new Scalability Testing and Tuning guide of the OpenNebula documentation.

Scalability in OpenNebula can be limited on the server side, in terms of the maximum number of nodes and Virtual Machines (VMs) in a single zone, and on the node side, in terms of the maximum number of VMs a single node is able to handle. In the first case, OpenNebula’s core defines the scale limit, while in the second case it is the monitoring daemon (collectd) client. A set of tests has been designed to address both cases. The general recommendation is to have no more than 2,500 servers and 10,000 VMs, as well as no more than 30 API req/s, managed by a single instance. Better performance and higher scalability can be achieved with specific tuning of other components like the DB, by using better hardware, or by adding a proxy server. In any case, to grow the size of your cloud beyond these limits, you can horizontally scale your cloud by adding new OpenNebula zones within a federated deployment. Currently, the largest OpenNebula deployment consists of 16 data centers and 300,000 cores.

Environment Setup

The hardware used for the tests was a Packet t1.small.x86 bare metal cloud instance. No optimization or extra configuration beyond the defaults was applied to OpenNebula. The hardware specifications are as follows:

CPU model: Intel(R) Atom(TM) CPU C2550 @ 2.40GHz, 4 cores, no HT
RAM: 8GB, DDR3, 1600 MT/s, single channel
HDD: INTEL SSDSC2BB150G7
OS: Ubuntu 18.04
OpenNebula: Version 5.8
Database: MariaDB v10.1 with default configurations
Hypervisor: Libvirt (4.0), QEMU (2.11), LXD (3.0.3)

Front-end (oned core) Testing

This is the main OpenNebula service, which orchestrates all the pools in the cloud (VMs, hosts, virtual networks, users, groups, etc.).

A single OpenNebula zone was configured for this test with the following parameters:

Number of hosts: 2,500
Number of VMs: 10,000
Average VM template size: 7KBytes

Note: Although the hosts and VMs used were dummies, each one represents an entry in the DB identical to that of a real host/VM with a template size of 7KBytes. For this reason, the results should be the same as in a real scenario with similar parameters.
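For reference, a pool of dummy hosts like the one used in this test can be registered with the stock dummy drivers; the following is just a sketch (host names and count are illustrative):

# register 2,500 hosts monitored and "virtualized" by the dummy drivers
for i in $(seq 1 2500); do
  onehost create "dummy-host-$i" --im dummy --vm dummy
done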

The four most common API calls were used to stress the core simultaneously, in approximately the same ratio experienced on real deployments. Total API loads of 10, 20 and 30 calls per second were used. Under these conditions, with a host monitoring rate of 20 hosts/second, a pool of 2,500 hosts, and a resulting monitoring period of 125 seconds per host, the response times in seconds of the oned process for the most common XML-RPC calls are shown below:

Response Time (seconds)

API Call (ratio)        API Load: 10 req/s   API Load: 20 req/s   API Load: 30 req/s
host.info (30%)         0.06                 0.50                 0.54
hostpool.info (10%)     0.14                 0.41                 0.43
vm.info (30%)           0.07                 0.51                 0.57
vmpool.info (30%)       1.23                 2.13                 4.18
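To get a rough feel for a similar read-only load from the shell, the CLI commands below map to the XML-RPC calls in the table above (the object IDs and concurrency are illustrative and not the exact harness used in the test):

for i in $(seq 1 10); do
  onehost show 0 >/dev/null &   # one.host.info
  onehost list   >/dev/null &   # one.hostpool.info
  onevm show 0   >/dev/null &   # one.vm.info
  onevm list     >/dev/null &   # one.vmpool.info
done
wait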

Host (monitoring client) Testing

This test stresses the monitoring probes in charge of querying the state, consumption, possible crashes, etc. of both physical hypervisors and virtual machines.

For this test, virtual instances were deployed incrementally. The monitoring client was executed each time 20 new virtual instances were successfully launched, before launching the next 20, in order to measure the time needed to monitor every virtual instance. This process was repeated until the node ran out of resources to allocate, which happened at 250 virtual instances, at which point OpenNebula’s scheduler was not able to deploy any more instances. Two monitoring drivers were tested: KVM and LXD. These are the settings for each KVM and LXD instance deployed:

Virtual Instances   OS                  RAM    CPU
KVM VMs             None (empty disk)   32MB   0.1
LXD containers      Alpine 3.8          32MB   0.1

Results for each driver are as follows:

Monitoring Driver   Monitor time per virtual instance
KVM IM              0.42 seconds
LXD IM              0.1 seconds

 

Since we founded the OpenNebula open source project more than 10 years ago, we have been using the Contributor License Agreement (CLA) mechanism for software contributions that include new functionality and intellectual contributions to the software. Although the CLA has been an industry standard for contributions to open source projects, it is largely unpopular with developers.

In order to remove barriers to contribution and allow everyone to contribute, the OpenNebula project has adopted a mechanism known as a Developer Certificate of Origin (DCO) to manage the contribution process. The DCO is a legally binding statement that asserts that you are the creator of your contribution, and that you wish to allow OpenNebula to use your work.

The text of the DCO is fairly simple and available from developercertificate.org. Acknowledgement of this permission is done by using a sign-off process in Git. The sign-off is a simple line at the end of the explanation for the patch. More info here:

https://github.com/OpenNebula/one/wiki/Sign-Your-Work
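In practice, you add the sign-off by committing with the -s flag; Git appends a Signed-off-by trailer built from your configured user.name and user.email, for example:

git commit -s -m "Fix typo in the LXD driver documentation"

# the commit message then ends with a line like:
#   Signed-off-by: Jane Doe <jane@example.com>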

We are looking forward to your valuable contributions!

 

OpenNebula Cloud Management on VMware vCenter

Companies’ data centers continue to grow, handling new and larger workloads, and ultimately making virtual infrastructure and cloud computing a “no-brainer”. For those companies that have invested in VMware platform solutions, it shouldn’t be news that OpenNebula provides a comprehensive and affordable solution for managing one’s VMware infrastructure and creating a multi-tenant cloud environment. Full integration and support for VMware vCenter infrastructure has been a cornerstone feature of OpenNebula. And when questions arise like “How do I effectively turn my vSphere environment into a private cloud?”, “How can I best manage multiple data centers?”, “Is there an easier way to manage provisioning and to control compute workloads?”, or “How can I take advantage of public cloud offerings and seamlessly integrate them with my private, on-premises cloud?”, users with established VMware infrastructure need to know that OpenNebula provides an inclusive, yet simple, set of capabilities for Virtual Data Center Management and Cloud Orchestration.

This OpenNebula-VMware Solution Brief provides an overview of the long-standing integration.

The highlights include:

  • OpenNebula offers a simple, lightweight orchestration layer that amplifies the management capabilities of one’s VMware infrastructure.
  • It delivers provisioning, elasticity, and multi-tenancy cloud features, including:
    • virtual data center provisioning
    • data center federation
    • hybrid cloud capabilities to connect in-house infrastructures with public cloud resources
  • Distributed collections of vCenter instances across multiple data centers can be managed by a single instance of OpenNebula.
  • Public cloud resources from AWS and Microsoft Azure can be easily integrated into one’s OpenNebula cloud and managed like any other private cloud resource.
  • And with the validation of OpenNebula on VMware Cloud on AWS, one can grow his or her on-premises infrastructure on-demand with remote vSphere-based cloud resources running on VMware Cloud on AWS, just as one could do with local VMware infrastructure resources. All this, in a matter of minutes.

The compatibility and features that OpenNebula offers to VMware users have been fundamental elements to our software solution for a long time running.  However, that doesn’t make it any less exciting to “spread the word”!

 

 

One of the biggest features in the recent OpenNebula 5.8 Edge release is, no doubt, the support for Linux containers (LXD) – which we already covered in our blog.

If you are tempted to give it a try, go ahead; it’s really simple! You can start in AWS with the common Ubuntu 18.04 image, and the whole setup from start to finish won’t take you more than a matter of minutes.

The minimal recommended instance size is perhaps t2.medium. Just give it at least 25GB of disk space and allow access to TCP port 9869, where the WebUI runs.
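If you drive AWS from the command line, opening that port on the instance’s security group could look like the following sketch (the security group ID is a placeholder):

aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 9869 --cidr 0.0.0.0/0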

Then comes the simple deployment, for which you can download miniONE

wget https://github.com/OpenNebula/minione/releases/download/v5.8.0/minione

grant execution permission to the tool

chmod u+x minione

and deploy OpenNebula with a pre-configured LXD environment just by running

sudo minione --lxd

When it’s done, you can follow the try-out section of the miniONE guide to launch your first containers. miniONE prepares one image and template for you – “Centos7 – KVM” – but don’t worry about the name, as it works for LXD as well. The virtual network is also exactly the same – no differences at all. The scheduler simply checks which hosts (hypervisors) are available and decides where to launch the instance. And since we ran miniONE with the --lxd parameter, the LXD host will be configured.
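As a quick illustration of those try-out steps (the exact template name may differ on your installation, so list the templates first):

onetemplate list
onetemplate instantiate "Centos7 - KVM" --name my-first-container
onevm list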

Follow along step-by-step in the following screencast video:

  OpenNebula 5.8 – Install with LXD containers in minutes using miniONE

Feel free to check out other images from the OpenNebula Marketplace, or you can also create an additional Marketplace with the https://images.linuxcontainers.org/ backend, which contains plenty of upstream LXD containers.
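A sketch of registering such an additional marketplace from the CLI follows; the marketplace name is arbitrary, and the MARKET_MAD value assumes the LinuxContainers marketplace driver shipped with 5.8:

cat > lxc_market.conf <<'EOF'
NAME       = "Linux Containers"
MARKET_MAD = linuxcontainers
EOF

onemarket create lxc_market.conf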

Give it a shot, and share your feedback!

LXD has recently become the next-generation system container manager on Linux. While building on top of the low-level LXC, it clearly improves container orchestration, making administration easier and adding the management of tasks like container migration and the publishing of container images.

In the realm of cloud computing, system container management solutions have yet to reach the widespread popularity of application container solutions, primarily because there is little to no integration with private and public cloud management platforms, or with Kubernetes. OpenNebula 5.8 “Edge” complements the lack of automation in LXD as a standalone hypervisor and opens up a new set of use cases, especially for large deployments.

When looking at LXD containers as an option for your virtualized infrastructure, and comparing them to “full-fledged” hypervisors, you will see many benefits – the main ones starting with:

  • a smaller storage and memory footprint
  • lack of virtualized hardware
  • faster workloads
  • faster deployment times 

What do you get with OpenNebula and LXD integration?

It’s great to be able to deploy and utilize these lightweight and versatile LXD containers in your virtual infrastructure.  But the real fireworks start to go off when you contemplate what you’ll get when running OpenNebula on your LXD infrastructure!

As with KVM hypervisors, OpenNebula 5.8’s integration with LXD provides advanced features for capacity management, resource optimisation, business continuity, and high availability, offering you complete and comprehensive control over your physical and virtual resources. On top of that, you can manage the provisioning of virtual data centers, creating completely elastic and multi-tenant cloud environments, all from within the simple Sunstone GUI or the available CLI. And where you may want to maintain the flexibility of a heterogeneous, multi-hypervisor environment – clusters of LXD containers alongside clusters of other hypervisors – OpenNebula will manage those resources seamlessly within the same cloud.

From a compatibility perspective, OpenNebula 5.8 and LXD provide the following:

  • Supported storage backends are filesystems with raw and qcow2 devices, and Ceph with RBD images. As a result, the LXD drivers can use regular KVM images.
  • The native network stack is fully compatible.
  • The LXD drivers support installations from both apt and snap packages. There is also a dedicated marketplace for LXD, backed by the public image server at https://images.linuxcontainers.org/, where you have access to every officially supported containerized distribution.

Remember, LXD containers are only suitable for Linux workloads and share the kernel of the host OS. Also, the LXD drivers still lack some functionality, like snapshotting and live migration. So being able to create a heterogeneous OpenNebula cloud using both LXD and KVM, wherever necessary, brings the best of both worlds.
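As a minimal sketch (assuming the standard KVM and LXD driver names and illustrative host names), mixing both hypervisors in the same cloud is just a matter of registering hosts with different drivers:

onehost create kvm-node1 --im kvm --vm kvm
onehost create lxd-node1 --im lxd --vm lxd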

OpenNebula 5.8 is “worth writing home about”, and LXD support is certainly one key reason why!

Our newsletter contains the highlights of the OpenNebula project and its Community throughout the month.

Technology

Yeah, February is a short month… but it was jam-packed with activity. This month we kept our collective “nose to the grindstone” and released OpenNebula v.5.8 “Edge”! After months of focused development and several weeks of beta testing and bug fixes, we finally brought 5.8 “Edge” to market. Now it is time for you all to get your hands on it and put it to the test.

You’ll see significant scalability improvements, as well as the introduction of key functionalities that live up to its codename “Edge”. Features like LXD container support, native provisioning of bare metal providers like Packet and AWS, and automatic NIC selection will all make expanding your cloud infrastructure to the edge simple and efficient. Read up on the details of the 5.8 release.

And as part of this month’s beta testing period, we introduced beta contextualization packages – KVM images on our Marketplace with the packages pre-installed – to make it easy to import the appliances and put the beta versions to the test. In the end, easier testing translates into an easier release.

Community

OpenNebula, in partnership with Packet, is a proud initial participant in Packet’s Edge Alliance Program. This is a novel collaboration to provide edge infrastructure, technology partnerships, and expertise, with the focus of creating a more fluid and available environment for Edge Computing practice and innovation. The idea is to provide a springboard for open-source and commercial use cases “on the edge” and to hit the ground running.

Mobile World Congress 2019, one of the largest gatherings for the mobile industry where electronics and telecoms firms show off their latest innovations, just wrapped up in Barcelona.  While there was certainly plenty to see there, one of the highlighted presentations was given by Telefónica, in which they reviewed their prototype of an Open Access network “in a scenario of triple convergence of fixed, mobile, and edge computing” – a solution with OpenNebula at its core.  Great work, Telefónica!

And here’s one more shout-out to all of the Community members and users of OpenNebula who helped to get this latest software version developed, tested, and “out the door”. Your support and cooperation are key to the success of OpenNebula.

Outreach

The schedule for OpenNebula TechDays has been finalized and published on our website.  Check your schedule, and see how you can attend one of these FREE events, hosted by enthusiastic partners of ours, to learn the ins-and-outs of OpenNebula and the details of the new version release:

  • May 8, 2019 – Barcelona, Spain – hosted by CSUC
  • May 16, 2019 – Sofia, Bulgaria – hosted by StorPool
  • June 11, 2019 – Cambridge, MA USA – hosted by OpenNebula Systems
  • September 11, 2019 – Frankfurt, Germany – hosted by Interactive Network and EuroCloud Germany
  • September 26, 2019 – Vienna, Austria – hosted by NTS

Last month we announced the details of our OpenNebula Conference 2019 in Barcelona, Spain on October 21-22, 2019.  Don’t forget that “Very Early Bird” pricing is available.

And as always, don’t forget to join our Developers’ Forum.  We saw a lot of interesting queries and questions posted throughout our various channels of communication (Twitter, Facebook, etc) this month.  The Developers’ Forum is the quintessential forum where you can learn about the latest talking points, what types of issues people are having, and how to resolve them.

Stay connected!