Over the last several years, we have seen the explosive value that cloud computing has brought to the market, and the ever-growing shift toward centralized data centers supporting business processing at every scale. Today's cloud infrastructure provides an extremely effective and economical platform for flexing with businesses' persistent need for more storage and compute. With the rapid growth of data comes a corresponding growth in the need to process it. Until now, the prevailing paradigm has been the ability to grow one's data center swiftly and agilely to meet that demand, with virtualized data centers and cloud infrastructures as the foundational tools.
However, with the Internet of Things (IoT) and the forthcoming explosion of "everything connected", we are seeing that centralized cloud infrastructure, on its own, will not be a silver bullet. Mobile devices, which, ironically enough, we still call "phones", continue to evolve, offering an ever-growing range of capabilities and burgeoning processing power. Homes, offices, public buildings, and automobiles now collect and generate huge amounts of data, and as we walk by with our phones or drive by in our cars, we will need and expect a far more complete, almost inherent, interaction with it. This is where the current cloud model falls short.
As this explosion of connected data and IoT grows, and interactions between things need to feel almost human, the basic paradigm shifts from a need for scale to a need for speed. Latency becomes paramount in these "connected" interactions. This is why bringing cloud capabilities closer to the consumer, closer to "the Edge", is emerging as a model.
At OpenNebula Systems, we have focused over the last decade on bringing a simple, yet flexible and comprehensive, virtual data center and cloud management solution to the market in OpenNebula. As demands have developed and user needs have changed, we have continued to innovate. Within the last month, we released the first version of a prototype with cloud disaggregation capabilities. It is the first step in our effort to integrate edge computing while ultimately maintaining an integrated experience of cloud orchestration and resource management.
With this prototype, we carried out a simple but illustrative use case demonstrating the value of being able to "disaggregate" one's cloud infrastructure and bring it closer to the user (for now, we have introduced support for bare-metal resources from both Packet and AWS EC2).
We assumed that a fictitious company, ACME Corporation, was located in Sacramento, California, where we instantiated an OpenNebula node to emulate an on-premise private cloud for the company. The case begins with ACME realizing that it is getting a lot of system traffic, not only from within the California region but also from users in France. With OpenNebula and the newly introduced Host Provisioning capabilities, ACME Corporation can now:
- deploy new physical hosts on selected bare-metal cloud providers
- install and configure them as KVM hypervisors
- and add them into existing OpenNebula clusters as independent hosts
…all within minutes.
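Conceptually, those three steps can be captured in a single provision description that a tool like this consumes. The sketch below is purely illustrative: every field name here is an assumption for the sake of the example, not the prototype's actual schema.

```yaml
# Hypothetical provision description (illustrative field names only)
name: acme-edge-marseille
provider: packet            # or an AWS EC2 bare-metal offering
facility: marseille
hosts:
  - count: 1
    hypervisor: kvm         # installed and configured automatically
cluster: acme-edge          # existing OpenNebula cluster to join
```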
In terms of Host Provisioning, for this exercise we utilized bare-metal resources from Packet, deploying and configuring two separate edge nodes: one in Los Angeles, California, and the other in Marseille, France.
| Edge Node / Location | Deployment time | Configuration time |
|---|---|---|
| Node 1 – Los Angeles, CA | 5 minutes | 3 minutes |
| Node 2 – Marseille, France | 5 minutes | 7 minutes |
Essentially, within 8 minutes and 12 minutes, respectively, we were able to deploy two physical hosts on bare-metal resources and configure each of them as a KVM hypervisor.
The next step was to deploy a virtual machine. In this case, we utilized Alpine Linux virtual router appliances with a physical size of 71 MiB. (Deployment time covers the total time between the deploy order and the VM entering the running state, excluding the initial image transfer, which is required only the first time the VM is deployed at a new location.)
| Edge Node / Location | Deployment time | Image transfer time |
|---|---|---|
| Node 1 – Los Angeles, CA | 1 second | 3 seconds |
| Node 2 – Marseille, France | 9 seconds | 15 seconds |
So, within a matter of minutes, ACME Corporation was able to deploy two separate virtual nodes, all controlled from a single, centrally managed OpenNebula private cloud. And here is where the rubber meets the road. First, we measured latencies for the centralized cloud use case:
| Use Case | Infrastructure arrangement | Latency |
|---|---|---|
| User in Los Angeles, CA | Between the user and the on-premise cloud (node in Sacramento, CA) | 12 milliseconds |
| User in Marseille, France | Between the user and the on-premise cloud (node in Sacramento, CA) | 174 milliseconds |
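Latency figures like these can be gathered with a simple round-trip timing loop. Here is a minimal Python sketch of the idea; since the ACME endpoints are fictitious, it times TCP connects against a throwaway local listener standing in for a node.

```python
import socket
import threading
import time

def measure_rtt(host, port, samples=5):
    """Return the median TCP connect round-trip time to host:port, in ms."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # we only care about connection setup time
        times.append((time.perf_counter() - start) * 1000)
    times.sort()
    return times[len(times) // 2]

# Throwaway local listener as a stand-in for an edge node.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(16)
port = server.getsockname()[1]
threading.Thread(
    target=lambda: [server.accept()[0].close() for _ in range(5)],
    daemon=True,
).start()

print(f"median RTT: {measure_rtt('127.0.0.1', port):.2f} ms")
```

Against a real remote node you would point `measure_rtt` at a service port on the host (or simply use `ping` for ICMP round-trip times, as is conventional).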
We then measured latencies for the following disaggregated cloud infrastructure:
| Use Case | Infrastructure arrangement | Latency |
|---|---|---|
| User in Los Angeles, CA | Between the user and the edge (node in Los Angeles, CA) | 9 milliseconds |
| User in Marseille, France | Between the user and the edge (node in Marseille, France) | 10 milliseconds |
| User in Paris, France | Between the user and the edge (node in Marseille, France) | 12 milliseconds |
The result is simple. By utilizing OpenNebula's capability to easily provision a separate, fully functional node on a bare-metal provider such as Packet, geographically closer to the end user, one can achieve a significant improvement in latency. In this case, ACME Corporation was able to reduce the latency for users in France from 174 milliseconds to 10 milliseconds. In a world with an increasing focus on connected data, gaming, and IoT, this will become more and more critical.
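The routing decision and its payoff can be sketched in a few lines of Python. The figures come straight from the tables above; the node names are illustrative.

```python
def nearest_node(latencies_ms):
    """Pick the reachable node with the lowest measured latency."""
    return min(latencies_ms, key=latencies_ms.get)

def latency_reduction_pct(before_ms, after_ms):
    """Percentage improvement from serving a user off a closer node."""
    return 100 * (before_ms - after_ms) / before_ms

# Measured latencies for the user in Marseille (milliseconds).
marseille_user = {"sacramento-onprem": 174, "marseille-edge": 10}

print(nearest_node(marseille_user))                      # marseille-edge
print(f"{latency_reduction_pct(174, 10):.0f}% lower")    # 94% lower
```

A roughly 94% latency reduction for the French users, with no change to how the cloud is managed.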
While this OpenNebula Host Provisioning prototype is an initial step in our focused development of edge computing and disaggregated clouds, OpenNebula Systems is also heavily involved in building similar capabilities through its collaboration with the telecommunications giant Telefónica on "OnLife", Telefónica's initiative based on Central Office Re-architected as a Datacenter (CORD). Read here for additional details about Telefónica's "OnLife" initiative.
Stay connected with developments at OpenNebula Systems. Don't forget to join our newsletter, or reach out to me directly (email@example.com) with any questions or suggestions. We maintain and nurture a strong community of users, and we would love to hear your feedback and insights.