A new appliance in the Marketplace: Kubernetes (K8s)

We are happy to announce a new addition to the steadily growing OpenNebula Marketplace. This time we are bringing you the most popular container orchestration platform – Kubernetes. As with the previously introduced appliances (you can read more about them in our previous blog post), our Kubernetes appliance, too, gives you a simple, “press-of-a-button” way to create and deploy a functional service.

In the past, Kubernetes was notoriously hard to set up, which is the reason why projects like Rancher sprang up… (Do you want it as a future appliance? Let us know!) We have also tried to make the creation of K8s clusters much simpler for you. The appliance supports multiple contextualization parameters so it can bend to your needs and your required configuration. This works in very much the same spirit as all the other ONE service appliances.

On top of this, we extended the simplicity and versatility of this appliance with OneFlow service support, which makes perfect sense for Kubernetes clusters. Now you can deploy a whole K8s multi-node cluster with just one click. More info can be found in our Service Kubernetes documentation.
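
If you prefer the command line over Sunstone, the same one-click deployment can be driven through the OneFlow CLI. A minimal sketch (the service template ID is an assumption; use whatever oneflow-template list reports for the imported Kubernetes appliance):

oneflow-template list
# instantiate the imported Kubernetes service template (ID 0 is an assumption)
oneflow-template instantiate 0
# watch the master and worker roles come up
oneflow show 0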

Kubernetes, Docker, microservices, containers, and all those other trendy cloud technologies and terminologies can become confusing at times, and not everyone is fully versed in these new topics. (Have you heard about DevOps and CI/CD?) So let us clarify what exactly our Kubernetes appliance does for you and what it doesn’t.

This service appliance provides you with a K8s cluster (one master node and an arbitrary number of worker nodes – including zero). Every node is just a regular VM, with which you are familiar. OpenNebula does NOT manage containers or pods inside a created K8s cluster. When you deploy this service appliance, you get a K8s cluster which exposes the Kubernetes API (on a designated IP address of the master node). You can access it via kubectl or the UI dashboard (see the picture below) to create pods, deployments, services, etc. You can also add more nodes to the cluster at any time later using contextualization. But other than that, you are in charge, and it is up to you to keep it up and running.
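
For example, once the service is up you can point kubectl at the master node and start scheduling workloads yourself. A minimal sketch, assuming the usual kubeadm location of the admin kubeconfig on the master node (the path, user and addresses are assumptions; check the appliance documentation):

# copy the cluster credentials from the master node (path is an assumption)
scp root@<master-ip>:/etc/kubernetes/admin.conf ./kubeconfig
export KUBECONFIG=$PWD/kubeconfig

# the cluster is now yours to manage
kubectl get nodes
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort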

Have a look in the OpenNebula Marketplace.

Check out the video screencast on how to get started with the K8s appliance.

 

OpenNebula Edge – Maintenance release v.5.8.1 is now available!

There’s plenty to be excited about with 5.8 Edge – and now we have released a maintenance release v.5.8.1, with bug fixes and a set of new minor features, which include:

  • Added a time picker for relative scheduled actions
  • Added a vCenter cluster health check to monitoring
  • Implemented nested AND and OR filters when filtering from the CLI
  • Added an input for the command to be executed in an LXD container through the VNC terminal
  • Updated Ceph requirements for LXD setups
  • Extended logs in LXD actions with the native container log
  • New API call: one.vmpool.infoextended
  • Added official support for Sunstone banners

Check the release notes for the complete set of new features and bug fixes.


OpenNebula Systems has just announced the availability of vOneCloud version 3.4.

vOneCloud 3.4 is powered by OpenNebula 5.8 “Edge”, and, as such, includes functionalities present in Edge relevant to vOneCloud:

  • Change the boot order of VM devices by updating the VM Template. More info here.
  • VM migration between clusters and datastores is now supported; check here.
  • Migrate images from KVM to vCenter, or vice versa. More info here.
  • New configuration file: the default behaviour of the image importation process can now be changed. More info here.
  • Scheduled VM actions can be specified relative to the VM start time, for example: terminate this VM one month after it was created.
  • Automatic selection of Virtual Networks for VM NICs, to balance network usage at deployment time or reduce clutter in your VM Template list. More info here.
  • New self-provisioning model for networks, Virtual Network Templates. Users can now instantiate their own virtual networks from predefined templates with their own addressing (see the sketch after this list).
  • Support for NIC Alias: VMs can have more than one IP associated with the same network interface. More info here.
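
As an illustration of the new self-provisioning model, a user could create a network from a predefined Virtual Network Template directly from the CLI, roughly as sketched below (this assumes your installation ships the onevntemplate command and that a template with ID 0 has been shared with you; names and IDs are placeholders):

# list the Virtual Network Templates made available by the administrator
onevntemplate list
# instantiate your own virtual network from template 0 (ID is an assumption)
onevntemplate instantiate 0
# the resulting network appears in your own virtual network pool
onevnet list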

Multiple bugfixes and documentation improvements have been included in this version. The complete list of changes can be checked on the development portal.

vOneCloud 3.4 has been certified with support for vSphere 6.0, 6.5 and 6.7.

If you are looking for additional details about OpenNebula integration with VMware, check out the recently published VMware Solution Brief, as well as this OpenNebula-VMware blog post.


 

One of OpenNebula’s main features is its low resource footprint. This allows OpenNebula clouds to grow to a massive scale without a large impact on the hardware required. There is a continuous effort from the team behind OpenNebula’s development related to efficiency and performance, and several improvements in this area have been included in the latest release, OpenNebula 5.8 “Edge”. The objective of this blog post is to describe the scalability testing performed to define the scale limits of a single OpenNebula instance (single zone). This testing, along with some recommendations to tune your deployment, is described in the new Scalability Testing and Tuning guide of the OpenNebula documentation.

Scalability for OpenNebula can be limited on the server side, in terms of the maximum number of nodes/Virtual Machines (VMs) in a single zone, and on the node side, in terms of the maximum number of VMs a single node is able to handle. In the first case, OpenNebula’s core defines the scale limit, while in the second case it is the monitoring daemon (collectd) client. A set of tests has been designed to address both cases. The general recommendation is to have no more than 2,500 servers and 10,000 VMs, as well as an API load of 30 req/s, managed by a single instance. Better performance and higher scalability can be achieved with specific tuning of other components like the DB, using better hardware, or adding a proxy server. In any case, to grow the size of your cloud beyond these limits, you can scale horizontally by adding new OpenNebula zones within a federated deployment. Currently, the largest OpenNebula deployment consists of 16 data centers and 300,000 cores.

Environment Setup

The hardware used for the tests was a Packet t1.small.x86 bare metal cloud instance. No optimization or extra configuration beyond the defaults was applied to OpenNebula. The hardware specifications are as follows:

CPU model: Intel(R) Atom(TM) CPU C2550 @ 2.40GHz, 4 cores, no HT
RAM: 8GB, DDR3, 1600 MT/s, single channel
HDD: INTEL SSDSC2BB150G7
OS: Ubuntu 18.04
OpenNebula: Version 5.8
Database: MariaDB v10.1 with default configurations
Hypervisor: Libvirt (4.0), QEMU (2.11), LXD (3.0.3)

Front-end (oned core) Testing

This is the main OpenNebula service, which orchestrates all the pools in the cloud (VMs, hosts, virtual networks, users, groups, etc.).

A single OpenNebula zone was configured for this test with the following parameters:

Number of hosts: 2,500
Number of VMs: 10,000
Average VM template size: 7 KB

Note: Although the hosts and VMs used were dummies, each one creates an entry in the DB identical to that of a real host/VM with a template size of 7 KB. For this reason, the results should be the same as in a real scenario with similar parameters.

The four most common API calls were used to stress the core simultaneously, in approximately the same ratio experienced in real deployments. The total API load used was 10, 20 and 30 calls per second. Under these conditions, with a host monitoring rate of 20 hosts/second (which, in a pool with 2,500 hosts, results in a monitoring period of 125 seconds per host), the response times in seconds of the oned process for the most common XML-RPC calls are shown below:

Response Time (seconds)

API Call (ratio)     | API Load: 10 req/s | API Load: 20 req/s | API Load: 30 req/s
host.info (30%)      | 0.06               | 0.50               | 0.54
hostpool.info (10%)  | 0.14               | 0.41               | 0.43
vm.info (30%)        | 0.07               | 0.51               | 0.57
vmpool.info (30%)    | 1.23               | 2.13               | 4.18
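
As an illustration only (this is not the harness used to produce the numbers above), a comparable mix of calls can be generated from a shell with the standard OpenNebula CLI commands, which issue the corresponding XML-RPC calls under the hood; the IDs and the sleep interval are placeholders:

# crude load generator: repeat the four most common calls
for i in $(seq 1 10); do
  onevm list      > /dev/null   # vmpool.info
  onevm show 0    > /dev/null   # vm.info (VM ID 0 is a placeholder)
  onehost list    > /dev/null   # hostpool.info
  onehost show 0  > /dev/null   # host.info (host ID 0 is a placeholder)
  sleep 0.1
done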

Host (monitoring client) Testing

This test stresses the monitoring probes in charge of querying the state, consumption, possible crashes, etc. of both physical hypervisors and virtual machines.

For this test, virtual instances were deployed incrementally. The monitoring client was executed each time 20 new virtual instances were successfully launched, before launching the next 20, in order to measure the time needed to monitor every virtual instance. This process was repeated until the node ran out of allocated resources, which happened at 250 virtual instances, at which point OpenNebula’s scheduler was not able to deploy more instances. Two monitoring drivers were tested: KVM and LXD. These are the settings for each KVM and LXD instance deployed:

Virtual Instances | OS                | RAM   | CPU
KVM VMs           | None (empty disk) | 32 MB | 0.1
LXD containers    | Alpine 3.8        | 32 MB | 0.1

Results for each driver are as follows:

Monitoring Driver | Monitor time per virtual instance
KVM IM            | 0.42 seconds
LXD IM            | 0.1 seconds

 

Since we founded the OpenNebula open-source project more than 10 years ago, we have been following the Contributor License Agreement (CLA) mechanism for software contributions that include new functionality and intellectual contributions to the software. Although the CLA has been an industry standard for open-source contributions in other projects, it is largely unpopular with developers.

In order to remove barriers to contribution and allow everyone to contribute, the OpenNebula project has adopted a mechanism known as a Developer Certificate of Origin (DCO) to manage the contribution process. The DCO is a legally binding statement that asserts that you are the creator of your contribution, and that you wish to allow OpenNebula to use your work.

The text of the DCO is fairly simple and available from developercertificate.org. Acknowledgement of this permission is done using a sign-off process in Git. The sign-off is a simple line at the end of the explanation for the patch. More info here:

https://github.com/OpenNebula/one/wiki/Sign-Your-Work
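
In practice, Git adds the sign-off line for you when you commit with the -s flag, using the name and email from your Git configuration:

git config user.name  "Jane Doe"
git config user.email "jane@example.com"

# -s appends the DCO sign-off to the commit message
git commit -s -m "Fix typo in scheduler documentation"
# the resulting commit message ends with:
#   Signed-off-by: Jane Doe <jane@example.com>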

We are looking forward to your valuable contributions!

 

OpenNebula Cloud Management on VMware vCenter

Companies’ data centers continue to grow, handling new and larger workloads, and ultimately making virtual infrastructure and cloud computing a “no-brainer”. For those companies that have invested in VMware platform solutions, it shouldn’t be news that OpenNebula provides a comprehensive and affordable solution for managing one’s VMware infrastructure and creating a multi-tenant cloud environment. Full integration with and support for VMware vCenter infrastructure has been a cornerstone feature of OpenNebula. And when questions arise like “How do I effectively turn my vSphere environment into a private cloud?”, “How can I best manage multiple data centers?”, “Is there an easier way to manage provisioning and to control compute workloads?”, or “How can I take advantage of public cloud offerings and seamlessly integrate them with my private, on-premises cloud?”, users with already established VMware infrastructure need to know that OpenNebula provides an inclusive, yet simple, set of capabilities for Virtual Data Center Management and Cloud Orchestration.

This OpenNebula-VMware Solution Brief provides an overview of the long-standing integration.

The highlights include:

  • OpenNebula offers a simple, lightweight orchestration layer that amplifies the management capabilities of one’s VMware infrastructure.
  • It delivers provisioning, elasticity and multi-tenancy cloud features, including:
    • virtual data center provisioning
    • data center federation
    • hybrid cloud capabilities to connect in-house infrastructures with public cloud resources
  • Distributed collections of vCenter instances across multiple data centers can be managed by a single instance of OpenNebula.
  • Public cloud resources from AWS and Microsoft Azure can be easily integrated into one’s OpenNebula cloud and managed like any other private cloud resource.
  • And with the validation of OpenNebula on VMware Cloud on AWS, one can grow one’s on-premises infrastructure on demand with remote vSphere-based cloud resources running on VMware Cloud on AWS, just as one could do with local VMware infrastructure resources. All this, in a matter of minutes.

The compatibility and features that OpenNebula offers to VMware users have been fundamental elements of our software solution for a long time now. However, that doesn’t make it any less exciting to “spread the word”!

 

 

One of the biggest features in the recent OpenNebula 5.8 Edge release is, no doubt, the support for Linux containers (LXD) – which we already covered in our blog.

If you are tempted to give it a try, go ahead – it’s really simple! You can start in AWS with the common Ubuntu 18.04 image, and the whole setup from start to finish won’t take more than a matter of minutes.

The minimum recommended instance size is probably t2.medium. Just give it at least 25 GB of disk space and allow access to TCP port 9869, where the web UI (Sunstone) is running.
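
If you manage the instance with the AWS CLI, opening that port can look roughly like this (the security group ID and source CIDR are placeholders; restrict the CIDR to your own address range):

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 9869 \
  --cidr 203.0.113.0/24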

Then comes the simple deployment, for which you can download miniONE

wget https://github.com/OpenNebula/minione/releases/download/v5.8.0/minione

grant execution permission to the tool

chmod u+x minione

and deploy OpenNebula with a pre-configured LXD environment just by running

sudo minione --lxd

When it’s done, you can follow the try-out section of the miniONE guide to launch your first containers. miniONE prepares one image and template for you (CentOS7, intended for KVM), but no worries about the name, as it works for LXD as well. Also, the virtual network is exactly the same – no differences at all. The scheduler just checks which hosts (hypervisors) are available and decides where to launch the instance. And as we ran miniONE with the --lxd parameter, the LXD host will be configured.
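
For the CLI-inclined, the same try-out can be done in a couple of commands (the template ID is an assumption; check onetemplate list for the actual ID of the template created by miniONE):

# the template prepared by miniONE usually gets a low ID; verify with the list
onetemplate list
onetemplate instantiate 0

# watch the container boot and find its IP address
onevm list
onevm show 0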

Follow along step-by-step in the following screencast video:

  OpenNebula 5.8 – Install with LXD containers in minutes using miniONE

Feel free to check other images from the OpenNebula Marketplace, or you can also create an additional Marketplace backed by https://images.linuxcontainers.org/, which contains plenty of upstream LXD containers.

Give it a shot, and share your feedback!

LXD has recently become the next-generation system container manager on Linux. While building on top of the low-level LXC, it clearly improves container orchestration, making administration easier and adding the management of tasks like container migration and the publishing of container images.

In the realm of cloud computing, system container management solutions have yet to reach the widespread popularity of application container solutions, primarily because there is little to no integration with either private or public cloud management platforms, or with Kubernetes. But OpenNebula 5.8 “Edge” compensates for the lack of automation in LXD as a standalone hypervisor and opens up a new set of use cases, especially for large deployments.

When looking at LXD containers as an option for your virtualized infrastructure, and comparing them to “full-fledged” hypervisors, you will see many benefits, the main ones being:

  • a smaller disk and memory footprint
  • no virtualized hardware
  • faster workloads
  • faster deployment times

What do you get with OpenNebula and LXD integration?

It’s great to be able to deploy and utilize these lightweight and versatile LXD containers in your virtual infrastructure.  But the real fireworks start to go off when you contemplate what you’ll get when running OpenNebula on your LXD infrastructure!

As with KVM hypervisors, OpenNebula 5.8’s integration with LXD provides advanced features for capacity management, resource optimisation, business continuity, and high availability, offering you complete and comprehensive control over your physical and virtual resources. On top of that, you can manage the provisioning of virtual data centers, creating completely elastic and multi-tenant cloud environments, all from within the simple Sunstone GUI or the available CLI. And where you may want to maintain the flexibility of creating a heterogeneous multi-hypervisor environment – clusters of LXD containers alongside clusters of other hypervisors – OpenNebula will manage those resources seamlessly within the same cloud.

From a compatibility perspective, OpenNebula 5.8 and LXD provide the following:

  • Supported storage backends are filesystems with raw and qcow2 devices, and Ceph with RBD images. As a result, the LXD drivers can use regular KVM images (see the template sketch after this list).
  • The native network stack is fully compatible.
  • The LXD drivers support installations from both apt and snap packages. There is also a dedicated marketplace for LXD, backed by the public image server at https://images.linuxcontainers.org/, where you have access to every officially supported containerized distribution.
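
As an illustration, an LXD container is launched from an ordinary OpenNebula VM template. The sketch below assumes you have already downloaded a container image from one of the marketplaces and created a virtual network; the image and network names are placeholders:

cat > alpine-lxd.tpl <<'EOF'
NAME     = "alpine-lxd"
CPU      = "0.5"
MEMORY   = "256"
DISK     = [ IMAGE = "alpine-3.8" ]
NIC      = [ NETWORK = "private" ]
GRAPHICS = [ TYPE = "vnc", LISTEN = "0.0.0.0" ]
EOF

onetemplate create alpine-lxd.tpl
onetemplate instantiate "alpine-lxd"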

Remember, LXD containers can only run Linux, and they share the kernel of the host OS. Also, the LXD drivers still lack some functionality, like snapshotting and live migration. So, being able to create a heterogeneous OpenNebula cloud using both LXD and KVM, wherever necessary, brings the best of both worlds.

OpenNebula 5.8 is “worth writing home about”, and LXD support is certainly one key reason why!

v.5.8 “Edge” is ready!

OpenNebula 5.8 “Edge” is the fifth major release of the OpenNebula 5 series. As you will have seen in recent communications around the “beta” releases, we have focused on introducing enhanced features on the solid base of 5.6 Blue Flash, while highlighting several Edge-focused features to bring the processing power of VMs closer to the consumers, and to dramatically reduce latency. As outlined earlier, 5.8 Edge comes with the following major features:

  • Support for LXD. This enables low resource container orchestration. LXD containers are ideal to run on low consumption devices closer to the customers.
  • Automatic NIC selection. This enhancement of the OpenNebula scheduler will alleviate the burden of VM/container Template management in edge environments, where the remote hosts can be potentially heterogeneous, with different network configurations (see the sketch after this list).
  • Distributed Data Centers. This feature is key for the edge cloud. OpenNebula now offers the ability to use bare metal providers to build remote clusters in a breeze, without needing to change the workload nature. We are confident that this is a killer feature that sets OpenNebula apart from the direct competitors in the space.
  • Scalability improvements. Orchestrating an edge cloud will be demanding in terms of the number of VMs, containers and hypervisors to manage. OpenNebula 5.8 brings to the table a myriad of improvements to the monitoring, pool management and GUI, to deliver a smooth user experience in large scale environments.
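
As a rough sketch of what automatic NIC selection can look like in a VM template, under our reading of the 5.8 feature (the image name is a placeholder; check the documentation for the exact options), the NIC is declared without a fixed network and the scheduler picks a suitable Virtual Network at deployment time:

cat > edge-vm.tpl <<'EOF'
NAME   = "edge-vm"
CPU    = "1"
MEMORY = "1024"
DISK   = [ IMAGE = "Ubuntu 18.04" ]
NIC    = [ NETWORK_MODE = "auto" ]
EOF

onetemplate create edge-vm.tpl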

In perfect alignment with its “Edge” name, the aim of OpenNebula 5.8 is to provide computing power across a wide geographic area and to offer services closer to customers, building a cloud managed from a single portal over very thin infrastructure.

The OpenNebula project would like to thank the community members and users who have contributed to this software release by being active with the discussions, answering user questions, or providing patches for bugfixes, features and documentation.

OpenNebula 5.8 Edge is considered to be a stable release and, as such, is available for updating production environments.


OpenNebula-based Edge Platform to be presented in 2019 Mobile World Congress

With OpenNebula as a core component, CORD (Central Office Re-architected as a Data Center) will be featured in Telefónica’s Edge Computing demos at the Mobile World Congress in Barcelona, Spain, from February 25-28. Stop by Telefónica’s booth (Hall 3, Stand 3K31) to see the new generation of Central Offices that are fully IPv6 compliant and allow for the deployment of programmable services rather than the traditional black-box solutions provided by proprietary vendors.

Telefónica’s CORD prototype aims to meet low-latency demands of the emerging Internet of Things ecosystem and to virtualize the access network and give third-party IoT application developers and content providers cloud-computing capabilities at the network edge.

You can find more details surrounding the solution in this Open CORD blog.
Below are some video presentations given by Telefónica on how OpenNebula forms a key element of their innovative solution: