One of OpenNebula’s main features is its low resource footprint, which allows OpenNebula clouds to grow to a massive scale without significantly increasing hardware requirements. The team behind OpenNebula makes a continuous effort to improve efficiency and performance, and several improvements in this area are included in the latest release, OpenNebula 5.8 “Edge”. The objective of this blog post is to describe the scalability testing performed to define the scale limits of a single OpenNebula instance (single zone). This testing, along with some recommendations for tuning your deployment, is described in the new Scalability Testing and Tuning guide in the OpenNebula documentation.

Scalability for OpenNebula can be limited on the server side, in terms of the maximum number of nodes and Virtual Machines (VMs) in a single zone, and on the node side, in terms of the maximum number of VMs a single node is able to handle. In the first case, OpenNebula’s core defines the scale limit; in the second, it is the monitoring daemon (collectd) client. A set of tests was designed to address both cases. The general recommendation is to have no more than 2,500 servers and 10,000 VMs, and an API load of no more than 30 requests per second, managed by a single instance. Better performance and higher scalability can be achieved with specific tuning of other components like the database, with better hardware, or by adding a proxy server. In any case, to grow your cloud beyond these limits, you can scale horizontally by adding new OpenNebula zones within a federated deployment. Currently, the largest OpenNebula deployment consists of 16 data centers and 300,000 cores.

Environment Setup

The hardware used for the tests was a Packet t1.small.x86 bare metal cloud instance. No optimization or extra configuration beyond the defaults was applied to OpenNebula. The hardware specifications are as follows:

CPU model: Intel(R) Atom(TM) CPU C2550 @ 2.40GHz, 4 cores, no HT
RAM: 8GB, DDR3, 1600 MT/s, single channel
OS: Ubuntu 18.04
OpenNebula: Version 5.8
Database: MariaDB v10.1 with default configurations
Hypervisor: libvirt (4.0), QEMU (2.11), LXD (3.0.3)

Front-end (oned core) Testing

This is the main OpenNebula service, which orchestrates all the pools in the cloud (VMs, hosts, virtual networks, users, groups, etc.).

A single OpenNebula zone was configured for this test with the following parameters:

Number of hosts: 2,500
Number of VMs: 10,000
Average VM template size: 7 KB

Note: although the hosts and VMs used were dummies, each one creates a database entry identical to that of a real host/VM with a 7 KB template. For this reason, results should match a real scenario with similar parameters.

The four most common API calls were used to stress the core simultaneously, in approximately the same ratio observed on real deployments. Three total API loads were tested: 10, 20 and 30 calls per second. Under these conditions, with a host monitoring rate of 20 hosts/second (i.e. a monitoring period of 125 seconds for each host in the 2,500-host pool), the response times in seconds of the oned process for the most common XML-RPC calls are shown below:

Response Time (seconds)
API Call (ratio)    API Load: 10 req/s    API Load: 20 req/s    API Load: 30 req/s
(30%)               0.06                  0.50                  0.54
(10%)               0.14                  0.41                  0.43
(30%)               0.07                  0.51                  0.57
(30%)               1.23                  2.13                  4.18
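As a sanity check, the 125-second monitoring period mentioned above follows directly from the pool size and the monitoring rate:

```shell
# Period between two monitoring runs of the same host:
# pool size divided by the monitoring rate.
HOSTS=2500
RATE=20   # hosts monitored per second
echo "Monitoring period: $(( HOSTS / RATE )) seconds"
# prints: Monitoring period: 125 seconds
```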

Host (monitoring client) Testing

This test stresses the monitoring probes in charge of querying the state, consumption, possible crashes, etc. of both physical hypervisors and virtual machines.

For this test, virtual instances were deployed incrementally. The monitoring client was executed after each batch of 20 virtual instances was successfully launched, and before launching the next batch, in order to measure the time needed to monitor every running instance. This process was repeated until the node ran out of allocated resources, which happened at 250 virtual instances, at which point OpenNebula’s scheduler could not deploy any more. Two monitoring drivers were tested: KVM and LXD. These are the settings for each KVM and LXD instance deployed:

Virtual Instances OS RAM CPU
KVM VMs None (empty disk) 32MB 0.1
LXD containers Alpine 3.8 32MB 0.1

Results for each driver are as follows:

Monitoring Driver Monitor time per virtual instance
KVM IM 0.42 seconds
LXD IM 0.1 seconds
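Extrapolating from these per-instance times, and assuming the probes run sequentially (an assumption, not something the test measured), a full monitoring sweep of the 250 instances on the node would take roughly:

```shell
# Rough full-sweep estimate: per-instance monitor time x 250 instances
for entry in "KVM 0.42" "LXD 0.1"; do
  set -- $entry
  echo "$1: $(awk "BEGIN { print $2 * 250 }") seconds per sweep"
done
# prints:
#   KVM: 105 seconds per sweep
#   LXD: 25 seconds per sweep
```

That is, about 105 seconds for KVM and 25 seconds for LXD, consistent with LXD’s lighter monitoring cost.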


Since we founded the OpenNebula open-source project more than 10 years ago, we have used a Contributor License Agreement (CLA) mechanism for software contributions that include new functionality and intellectual contributions to the software. Although the CLA has long been an industry standard for contributions to open source projects, it is largely unpopular with developers.

In order to remove barriers to contribution and allow everyone to contribute, the OpenNebula project has adopted a mechanism known as a Developer Certificate of Origin (DCO) to manage the contribution process. The DCO is a legally binding statement that asserts that you are the creator of your contribution, and that you wish to allow OpenNebula to use your work.

The text of the DCO is fairly simple and publicly available. Acknowledgement of this permission is given through a sign-off process in Git: the sign-off is a simple line at the end of the explanation for the patch. More info here:
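In practice, the sign-off is added with Git’s -s flag. A minimal sketch in a throwaway repository (the identity, file name and commit message are illustrative, not OpenNebula specifics):

```shell
# Work in a temporary repository for the demo
cd "$(mktemp -d)" && git init -q .
git config user.name  "Jane Doe"
git config user.email "jane@example.com"

echo "fix" > scheduler.rb && git add scheduler.rb
# -s appends the DCO trailer built from user.name/user.email
git commit -q -s -m "scheduler: fix placement on empty clusters"

# Show the trailer
git log -1 --format=%B | grep Signed-off-by
# prints: Signed-off-by: Jane Doe <jane@example.com>
```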

We are looking forward to your valuable contributions!


OpenNebula Cloud Management on VMware vCenter

Companies’ data centers continue to grow, handling new and larger workloads, and ultimately making virtual infrastructure and cloud computing a “no-brainer”.  For those companies that have invested in VMware platform solutions, it shouldn’t be news that OpenNebula provides a comprehensive and affordable solution for managing one’s VMware infrastructure and creating a multi-tenant cloud environment. Full integration with and support for VMware vCenter infrastructure has been a cornerstone feature of OpenNebula. And when questions arise like “How do I effectively turn my vSphere environment into a private cloud?”, “How can I best manage multiple data centers?”, “Is there an easier way to manage provisioning and to control compute workloads?”, or “How can I take advantage of public cloud offerings and seamlessly integrate them with my private, on-premises cloud?”, users with an established VMware infrastructure should know that OpenNebula provides an inclusive, yet simple, set of capabilities for Virtual Data Center Management and Cloud Orchestration.

This OpenNebula-VMware Solution Brief provides an overview of the long-standing integration.

The highlights include:

  • OpenNebula offers a simple, lightweight orchestration layer that amplifies the management capabilities of one’s VMware infrastructure.
  • It delivers provisioning, elasticity and multi-tenancy cloud features including
    • virtual data center provisioning
    • data center federation
    • hybrid cloud capabilities to connect in-house infrastructures with public cloud resources
  • Distributed collections of vCenter instances across multiple data centers can be managed by a single instance of OpenNebula.
  • Public cloud resources from AWS and Microsoft Azure can be easily integrated into one’s OpenNebula cloud and managed like any other private cloud resource.
  • And with the validation of OpenNebula on VMware Cloud on AWS, one can grow his or her on-premises infrastructure on-demand with remote vSphere-based cloud resources running on VMware Cloud on AWS, just as one could do with local VMware infrastructure resources. All this, in a matter of minutes.

The compatibility and features that OpenNebula offers to VMware users have been fundamental elements to our software solution for a long time running.  However, that doesn’t make it any less exciting to “spread the word”!



Executive Summary

We’d like to thank all of you who shared your perspective on OpenNebula as part of our 2018 Architecture Survey. This is the fourth architectural survey of OpenNebula since 2012, and the results were collected during the period of December 4, 2018 through January 11, 2019. Your participation is fundamental to our strategic focus on providing features and support that align with the infrastructure platforms and configurations you demand.

We have only included in the analysis the respondents that are using OpenNebula 5.x (latest series) and who we deem reliable because they have provided identification details that allow us to verify the answers of the survey. This is important given that our main aim is to have accurate and useful information about OpenNebula deployments. This survey is not a market survey and does not express all OpenNebula deployments worldwide.

The data provided helps to shed light on how OpenNebula is being used by the community, as well as providing some indicators of where to aim for the future. In comparing to our previous survey taken in 2015, there are some other notable findings:

  • The types of workloads handled by OpenNebula are evolving, with a growing proportion (85%) of them being run in a Production environment, while others run in Dev/Test (73%) and in a Proof of Concept mode (29%). OpenNebula shows its increasing maturity, compared with the 73% of deployments in production reported in our previous survey.
  • A few other indicators of its developing stability are both the Number of Users supported by OpenNebula clouds, as well as the Number of Nodes.  For each of these metrics, there has been steady growth across the board.
    • The growth in user count for OpenNebula clouds has progressively increased with now close to 20% servicing more than 1,000 users.  Smaller OpenNebula clouds, of 100 users or less, now constitute 45%, which is down from 70% in 2015.
    • And “node count” has seen steady growth as well, with 77% of cloud environments having more than 10 nodes, (up from 56% in 2015). Similarly, we see that 29% of cloud environments have more than 100 nodes, where in 2015 that figure was 20%.
  • Usage of OpenNebula in Commercial and Industrial enterprises continues to grow.  Its employment in organizations across various industries shows a growing distribution, with increasing usage in the IT, Telecommunications and Internet, Hosting and MSP’s, and Media and Gaming sectors – amounting to 75% of total usage.  And while still an important sector of OpenNebula usage, Academia and Research constitutes a smaller percentage of the overall – now 14%, down from 25% in our last survey.
  • Hybrid cloud usage continues to grow, with an increasing percentage of OpenNebula integration with AWS (39%), as well as with Microsoft Azure (22%).
  • With the advent of Configuration Management and Remote Execution tools, we see that a hefty segment of OpenNebula users (73%) are taking advantage of these to introduce automation to their environments.
  • A growing usage of CentOS as an Operating System has reached 50% across our survey participants, while usage of both Ubuntu and Debian have remained fairly steady at 42% and 22% respectively.

And OpenNebula continues in its aim to be the simplest and most flexible open source solution for private cloud and data center virtualization management.  Our user community rates it highly because of its Simplicity (83%), its Openness (69%) and Flexibility (56%), as well as its Vendor-neutrality (52%).

Thank you again for participating in our survey! Below you can review the detailed results.

A.  About the Organization

This year we broke out the demographics across multiple categories, with “Information Technology”, “Cloud Hosting and MSP’s”, and “Telco and Internet” companies assuming large portions of the demographics – but those added together with “Media and Gaming” and “Web, SaaS, and eCommerce” total up to a growing 75% of Industrial/Commercial users. 14% of users work in Science, R&D, and Academia, while just under 11% work for Government and Non-Profits.

Type of Organization


Usage by larger companies has steadily grown over time, now reaching 20% of companies with 5,000 employees or more, compared to 13% from our last survey.  While smaller companies – (of 500 employees or less) – continue to be avid users of OpenNebula, that demographic constitutes a smaller base of 58%, compared to 64% from our last survey.


Size of Organization (# of employees)


48% of deployments are located in Europe or in Russia, a slight shift downward from 50% in our last survey. We also see a small tick upward in growth within North America to 33% from 30%. These two geographic regions continue to lead the usage of OpenNebula by a large margin.

Geographic Region


B. About Cloud Usage

The usage of OpenNebula for Production workloads has seen steady growth to 85%, from 73% in 2015. Its use for Development and Testing has remained firm at 73%, while there is significant Proof of Concept work being done, as well (19%).

Type of Workload (allows for multiple selections)


It is logical that the “On-premise private cloud” remains the most common type of cloud being built with OpenNebula, with 78% of respondents confirming it as part of their cloud infrastructure.  However, the widening variety of clouds being created is again evident, with 54% of users venturing into “Hybrid private clouds”, 55% creating “Federated private clouds”, and 17% and 26% creating “Hosted private clouds” and “Distributed private clouds”, respectively.  Even 8% are beginning to venture into creating “Edge private clouds”. With the imminent release of v.5.8 Edge, it will be interesting to see how that last figure grows.

Type of Cloud Architecture (allows for multiple selections)


And from a use case perspective, 71% of organizations use OpenNebula for “Data Center Virtualization management”, and 48% use it to establish “Public Clouds, VPS and MSPs”.  39% of users take advantage of OpenNebula to create cloud environments on top of their VMware infrastructure, and 37% are laying the groundwork for Enterprise clouds.

Type of Use Case (allows for multiple selections)


Since 2015, the number of users in most OpenNebula clouds has seen significant growth.  Clouds with more than 1,000 users have reached 19% from 8%, while the smaller ones of 100 users or less now constitute 45%, from 70% in 2015.

Number of Users


Another new question we introduced this year aimed to understand which other Cloud providers users interact with.  Amazon Web Services (AWS) is, by far, the most common at 61%, with Microsoft Azure following at 35%.

Interaction with Other Cloud providers (allows for multiple selections)


C. About Cloud Configuration

59% of OpenNebula environments are “federated”, meaning that they have more than a single zone, and 9% are running more than 10 different zones.  This is up from 51% and 5%, respectively, from the last time we collected data.

Number of Zones (OpenNebula instances)


Node count – another metric in measuring cloud size – has steadily increased, as well.  77% of cloud environments have more than 10 nodes, which is a significant increase from 2015 when this measure was 56%.  And currently 29% of cloud environments have more than 100 nodes, where in 2015 that figure was 20%.

Number of Nodes


KVM hypervisors remain the most commonly used hypervisors in OpenNebula environments. Yet, usage with VMware hypervisor continues to be a very solid use-case. The percentage of KVM users has stayed fairly steady, at 75% compared to 73% previously.  And usage with VMware hypervisors has increased slightly, at 39% compared to 37%.

Hypervisor usage (allows for multiple selections)


56% of users have implemented some form of a “hybrid cloud” in their environment. And the most popular Public cloud providers used in tandem with OpenNebula are Amazon Web Services (AWS) at 39% and Microsoft Azure at 22%.  These two were most popular in our last survey, as well, but have increased slightly, where they were 30% and 16% respectively.

Public Clouds used for Hybrid (allows for multiple selections)


Shared datastores at 53% and Ceph at 40% remain the most widely used storage solutions in open environments – while Shared datastore usage dropped slightly from 60%, Ceph has remained steady at 40%. VMware FS, at 38%, is used in VMware-based deployments, mainly through vCenter, and dropped slightly from 40%.

Storage configurations (allows for multiple selections)


The most common network configuration is still the Standard Bridged network configuration (40%). 802.1Q VLAN (31%) and Open vSwitch (25%) remain widely-used choices as well, while there has been some movement to VXLAN and Open vSwitch-VXLAN with 14% and 10% respectively.  And for VMware deployments, we see steady usage of VMware networking (37%), primarily used through vCenter.

Networking configurations (allows for multiple selections)


Authentication practices, while shifting slightly toward external authentication systems, have remained fairly static.  The majority of deployments use the built-in User/Password authentication (68%), while LDAP and Active Directory have gained slightly more traction since the last survey, with 22% and 21% usage respectively. SSH, while widely used (42%), has seen a slight drop from 50%.

Authentication configurations (allows for multiple selections)


CentOS and Ubuntu are still the most popular Linux distributions for creating OpenNebula clouds with usage at 48% and 41% respectively, which is a slight upward movement from 44% and 40%.  Debian has remained steady at 22% usage.

Operating Systems (allows for multiple selections)


A new question was introduced to understand what types of configuration management systems are employed to take advantage of automation.  The most common platform is Ansible, with 50% usage. Puppet is another popularly used tool, at 29%. There are a few other tools used with some regularity, like Chef (8%) and SaltStack (6%). And 27% of respondents state that they do not use any configuration management system at all.

Configuration Management (allows for multiple selections)


Similar to the types of configuration management systems, we asked which tools are used to deploy OpenNebula, and the same set of tools lined up in a similar fashion. Ansible is the most-commonly used with 40% usage, and Puppet is used by 21%.  Tools like Chef (5%) and SaltStack (4%) have a small user base, while 41% of respondents state that they do not use any deployment tools.

Deployment tools (allows for multiple selections)


Another question inquired about Container or PaaS tools being used to manage applications.  While a large portion of respondents (48%) state that they are not yet using these types of tools, among those who are, the most commonly used is Kubernetes with 25% usage. Docker Swarm (14%) and OpenShift (10%) follow in popularity, while 9% of respondents state they are building their own solutions.

Container / PaaS for Application Management (allows for multiple selections)


When asked which Advanced Components are used or planned to be used, a large selection of components was confirmed.  The most commonly selected was “High Availability” at 65%. Other popular components are “Application Containerization” (47%), “Data Center Federation” (43%), and “OneFlow” (42%).  A few others are listed, while 14% state that they are not using, or interested in using, any advanced components just yet.

Advanced Components (allows for multiple selections)


From a provisioning interface perspective, a large majority of OpenNebula users are taking advantage of the Sunstone interface (82% of users). The CLI and API both have a fairly extensive usage across the community with 41% and 39% respectively, and the Cloud View (Self-Service portal) has 27% of usage.

Provisioning Interfaces (allows for multiple selections)


We inquired about which OpenNebula APIs are being used or planned to be used in the near future, and curiously, a very large constituency (45%) has shown interest in utilizing the Python bindings once v.5.8 is released. Of the other APIs available at the time of the survey, use or intended use of the Ruby bindings is common at 26%, while the Java and Go bindings follow with 16% and 13% respectively.  29% of users state no current interest in using the OpenNebula APIs.

OpenNebula APIs (allows for multiple selections)


And lastly, as seems to be the case for many years running, “Stability”, “Flexibility”, and “Openness” continue to be the top reasons why users choose OpenNebula.

Why OpenNebula? (allows for multiple selections)


Thank you again!

Stay connected!


One of the biggest features in the recent OpenNebula 5.8 Edge release is, no doubt, the support for Linux containers (LXD) – which we already covered in our blog.

If you are tempted to give it a try, go ahead, it’s really simple! You can start in AWS with the common Ubuntu 18.04 image and the whole setup from start to finish won’t take you more than a matter of minutes.

The minimal recommended size is perhaps t2.medium.  Just give it at least 25GB of disk space and allow access to TCP port 9869, where the web UI is running.

Then comes the deployment itself, for which you can download miniONE,

grant execution permission to the tool:

chmod u+x minione

and deploy OpenNebula with a pre-configured LXD environment just by running:

sudo minione --lxd

When it’s done, you can follow the try-out section of the miniONE guide to launch your first containers. miniONE prepares one image and template for you, named “CentOS 7 – KVM”; don’t worry about the name, as it also works for LXD. The virtual network is exactly the same as well – no differences at all. The scheduler simply checks which hosts (hypervisors) are available and decides where to launch, and since we ran miniONE with the --lxd parameter, the host will be configured for LXD.

Follow along step-by-step in the following screencast video:

  OpenNebula 5.8 – Install with LXD containers in minutes using miniONE

Feel free to check other images from the OpenNebula Marketplace, or you can also create an additional Marketplace whose backend contains plenty of upstream LXD containers.

Give it a shot, and share your feedback!

LXD has recently become the next-generation system container manager on Linux. Building on top of the low-level LXC, it clearly improves container orchestration, making administration easier and adding the management of tasks like container migration and the publishing of container images.

In the realm of cloud computing, system container management solutions have yet to reach the widespread popularity of application container solutions, primarily because there is little to no integration with either private or public cloud management platforms, or with Kubernetes. But OpenNebula 5.8 “Edge” complements the lack of automation in LXD as a standalone hypervisor and opens up a new set of use cases, especially for large deployments.

When looking at LXD containers as an option for your virtualized infrastructure, and comparing them to “full-fledged” hypervisors, you will see many benefits – the main ones starting with:

  • a smaller disk and memory footprint
  • lack of virtualized hardware
  • faster workloads
  • faster deployment times 

What do you get with OpenNebula and LXD integration?

It’s great to be able to deploy and utilize these lightweight and versatile LXD containers in your virtual infrastructure.  But the real fireworks start to go off when you contemplate what you’ll get when running OpenNebula on your LXD infrastructure!

As with KVM hypervisors, OpenNebula 5.8 integration with LXD provides advanced features for capacity management, resource optimisation, business continuity, and high availability, offering you complete and comprehensive control over your physical and virtual resources. On top of that, you can manage the provisioning of virtual data centers, creating completely elastic and multi-tenant cloud environments, all from within the simple Sunstone GUI or the available CLI. And where you may want to maintain the flexibility of creating a heterogeneous multi-hypervisor environment – clusters of LXD containers alongside clusters of other hypervisors – OpenNebula will manage those resources seamlessly within the same cloud.

From a compatibility perspective, OpenNebula 5.8 and LXD provides the following:

  • Supported storage backends are filesystems with raw and qcow2 devices, and Ceph with RBD images. As a result, LXD drivers can use regular KVM images.
  • The native network stack is fully compatible.
  • The LXD drivers support installations from both apt and snap packages. There is also a dedicated marketplace for LXD, backed by a public image server, where you have access to every officially supported containerized distribution.
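Putting these compatibility points together, a minimal LXD container template looks just like a KVM one. An illustrative sketch using the sizing from the host testing above (the image and network names are assumptions for the example):

```
NAME   = "alpine-lxd"
CPU    = "0.1"
MEMORY = "32"                        # MB
DISK   = [ IMAGE = "Alpine 3.8" ]    # a regular raw/qcow2 KVM image works
NIC    = [ NETWORK = "private" ]     # native network stack, same as for KVM
```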

Remember, LXD containers are only suitable for Linux, and share the kernel of the host OS. Also, LXD drivers still lack some functionalities like snapshotting and live migration.  So, being able to create a heterogeneous OpenNebula cloud using both LXD and KVM, wherever necessary, brings the best of both worlds.

OpenNebula 5.8 is “worth writing home about”, and LXD support is certainly one key reason why!

Our newsletter contains the highlights of the OpenNebula project and its Community throughout the month.


Yeah, February is a short month… but it was jam-packed with activity.  This month we kept our collective “nose to the grindstone” and released OpenNebula v.5.8 “Edge”!  Through months of focused development and several weeks of beta testing and bug fixes, we finally brought 5.8 “Edge” to market.  Now it is time for you all to get your hands on it and put it to the test.

You’ll see significant scalability improvements, as well as the introduction of key functionalities that certify its codename “Edge”.  Features like LXD container support, native provisioning of bare metal providers like Packet and AWS, and Automatic NIC selection will all make expanding your cloud infrastructure to the edge simple and efficient.  Read up on the details of the 5.8 version release.

And as part of the beta testing period this month, we introduced Beta Contextualization Packages – KVM images on our Marketplace with the pre-installed packages – to be able to easily import the appliances and give the beta versions a test.  In the end, easier testing translates to an easier release.


OpenNebula, in partnership with Packet, is a proud initial program participant in their Edge Alliance Program.  This is a novel collaboration to provide edge infrastructure, technology partnerships, and expertise with the focus of creating a more fluid and available environment for Edge Computing practice and innovation.  The idea is to provide a springboard for open-source and commercial use cases “on the edge” and to hit the ground running.

Mobile World Congress 2019, one of the largest gatherings for the mobile industry where electronics and telecoms firms show off their latest innovations, just wrapped up in Barcelona.  While there was certainly plenty to see there, one of the highlighted presentations was given by Telefónica, in which they reviewed their prototype of an Open Access network “in a scenario of triple convergence of fixed, mobile, and edge computing” – a solution with OpenNebula at its core.  Great work, Telefónica!

And here’s one more shout-out to all of the Community members and users of OpenNebula who helped to get this latest software version developed, tested, and “out the door”.  Your support and cooperation is key to the success of OpenNebula.


The schedule for OpenNebula TechDays has been finalized and published on our website.  Check your schedule, and see how you can attend one of these FREE events, hosted by enthusiastic partners of ours, to learn the ins-and-outs of OpenNebula and the details of the new version release:

  • May 8, 2019 – Barcelona, Spain – hosted by CSUC
  • May 16, 2019 – Sofia, Bulgaria – hosted by StorPool
  • June 11, 2019 – Cambridge, MA USA – hosted by OpenNebula Systems
  • September 11, 2019 – Frankfurt, Germany – hosted by Interactive Network and EuroCloud Germany
  • September 26, 2019 – Vienna, Austria – hosted by NTS

Last month we announced the details of our OpenNebula Conference 2019 in Barcelona, Spain on October 21-22, 2019.  Don’t forget that “Very Early Bird” pricing is available.

And as always, don’t forget to join our Developers’ Forum.  We saw a lot of interesting queries and questions posted throughout our various channels of communication (Twitter, Facebook, etc) this month.  The Developers’ Forum is the quintessential forum where you can learn about the latest talking points, what types of issues people are having, and how to resolve them.

Stay connected!

v.5.8 “Edge” is ready!

OpenNebula 5.8 “Edge” is the fifth major release of the OpenNebula 5 series. As you will have seen in recent communications around the “beta” releases, we have focused on introducing enhanced features on the solid base of 5.6 Blue Flash, while highlighting several Edge-focused features to bring the processing power of VMs closer to the consumers, and to dramatically reduce latency. As outlined earlier, 5.8 Edge comes with the following major features:

  • Support for LXD. This enables low resource container orchestration. LXD containers are ideal to run on low consumption devices closer to the customers.
  • Automatic NIC selection. This enhancement of the OpenNebula scheduler will alleviate the burden of VM/container Template management in edge environments where the remote hosts can be potentially heterogeneous, with different network configurations.
  • Distributed Data Centers. This feature is key for the edge cloud. OpenNebula now offers the ability to use bare metal providers to build remote clusters in a breeze, without needing to change the workload nature. We are confident that this is a killer feature that sets OpenNebula apart from the direct competitors in the space.
  • Scalability improvements. Orchestrating an edge cloud will be demanding in terms of the number of VMs, containers and hypervisors to manage. OpenNebula 5.8 brings to the table a myriad of improvements to the monitoring, pool management and GUI, to deliver a smooth user experience in large scale environments.

In perfect alignment with its “Edge” codename, the aim of OpenNebula 5.8 is to provide computing power across a wide geographic area, offering services closer to customers and building a cloud managed from a single portal over very thin infrastructure.

The OpenNebula project would like to thank the community members and users who have contributed to this software release by being active with the discussions, answering user questions, or providing patches for bugfixes, features and documentation.

OpenNebula 5.8 Edge is considered a stable release and, as such, is available as an update for production environments.

Relevant Links

Packet has announced its Edge Alliance Program, in which OpenNebula is one of the Initial Program participants.  This collaboration has been picking up steam over the past year, as focus is taking shape on Edge computing, and both platforms see a natural synergy to provide innovative solutions.  OpenNebula is just minutes away from its new version release of 5.8 “Edge”, which among other edge-focused capabilities, like providing native support for lightweight LXD containers and Automatic NIC selection, offers the ability to use bare metal providers (like Packet) to build remote clusters and to easily create Disaggregated Data Center environments along the Edge.

Packet’s Edge Alliance Program is a bold step toward encouraging innovation and providing “free access to edge computing building blocks”.  The newly announced availability of two edge sites in Chicago (IL), and a site separately deployed near Gillette Stadium in Foxborough (MA) is just the beginning of their goal to launch 15 separate site locations in 2019. With the perfect timing of OpenNebula’s 5.8 Edge release, it will become second-nature to provision Packet bare-metal resources within your cloud.

Check out more details about Packet’s Edge Alliance Program, and get engaged!

And read here for more details on OpenNebula’s partnership with Packet.

OpenNebula-based Edge Platform to be presented in 2019 Mobile World Congress

With OpenNebula as a core component, CORD (Central Office Re-architected as a Data Center) will be featured in Telefónica’s Edge Computing demos at the Mobile World Congress in Barcelona, Spain from February 25-28. Stop by Telefónica’s booth (Hall 3, Stand 3K31) to see the new generation of Central Offices that are fully IPv6 compliant and allow for the deployment of programmable services, rather than the traditional black-box solutions provided by proprietary vendors.

Telefónica’s CORD prototype aims to meet low-latency demands of the emerging Internet of Things ecosystem and to virtualize the access network and give third-party IoT application developers and content providers cloud-computing capabilities at the network edge.

You can find more details surrounding the solution in this Open CORD blog.
Below are some video presentations given by Telefónica on how OpenNebula forms a key element of their innovative solution: