As you may already know, the OpenNebula Conference 2019 is scheduled to take place on October 21-22 in Barcelona, Spain.  If you haven’t registered yet, “Early Bird – 20% off” pricing is still available.

Thanks to all of those who have submitted presentation proposals for this year’s event.  Due to several requests to extend the deadline, we will be accepting applications until Tuesday, May 21st.

And remember: in addition to getting in front of your colleagues and peers in the community, presenting is a great opportunity to seek feedback and insight, and to network in an engaging and supportive environment.  You will also get 50% off your registration, as well as 50% off an additional registration for a companion.

We would love to see you in Barcelona!

We have released a new, “refreshed” version of the OpenNebula Marketplace, with a cleaner, more efficient look and feel.  Pre-set filters are available for quick searches, and you can also search by keyword. What’s even better is that the Marketplace will adapt to any device you use – whether a phone, a tablet, or a computer.

There are a lot of important components available to you on the Marketplace. And now it will be that much easier for you to explore and use.

 

We are sending out a huge thanks to CSUC for hosting the OpenNebula TechDay in Barcelona this past Wednesday.  It is always exciting to meet up with folks in the community who are willing to share their insights and innovations, talk about their specific use cases, and ask how we can create better, more efficient solutions.  Huawei was there as well, sharing insight into the innovations they are making in ICT infrastructure, while also sponsoring the event.  Many thanks!

We had a great agenda lined up. You can check out the presentations here:

Keep your sights squared on upcoming OpenNebula TechDays.

Raúl Sánchez from Rancher Labs.

The curious crowd getting their questions answered.

Who we are

ARGO-ICT delivers managed IT services and solutions based on open technologies, ranging from managing a single application to a fully featured private (on-premises) cloud environment.

We focus on delivering solutions that demand as little time as possible from the engineers managing them, while also providing clear, traceable logs when something goes wrong and the freedom to adapt the solution when it doesn’t fit.

OpenNebula enabled us to deliver these solutions, and it offers many more features than we could have wished for.

Why we use OpenNebula

The environments of our customers differ a lot: from self-servicing customers to managed customers, with environments ranging from virtual machines and Kubernetes clusters to complete private cloud deployments. We needed a solution to support it all.

And that is exactly what OpenNebula provides in one functional package, without the need to integrate tens of components with a lot of custom work and bug fixing.

The web interface gives our self-servicing customers an easy-to-use portal where they can manage their own environment. OpenNebula’s chargeback feature has made usage-based billing easier than ever.

For managing customers’ environments, it provides us with all the CLI tools, APIs, and drivers needed to approach each environment as we like and integrate smoothly.

How we use OpenNebula

We focus on automation and simplicity. To achieve this we use Ansible for configuration management: from the OpenNebula cloud environment itself to the customer environments running on it, we use Ansible to ensure everything matches our needs and our customers’ needs. We therefore happily make use of the OpenNebula Ansible modules.
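As a brief, hedged illustration of what a task using the OpenNebula Ansible `one_vm` module can look like in a playbook (the endpoint, credentials, and template name below are placeholders, not our production values):

```yaml
# Sketch: keep two VMs from a base template running via the one_vm module.
- name: Ensure two VMs exist from our base template
  one_vm:
    api_url: http://one.example.com:2633/RPC2   # placeholder endpoint
    api_username: oneadmin
    api_password: "{{ one_password }}"          # e.g. stored in a vault
    template_name: base-ubuntu                  # placeholder template
    count: 2
    state: present
```

The same module family (`one_host`, `one_image`, `one_vm`) lets us describe hypervisor hosts and images declaratively alongside the rest of our configuration.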

Maximizing availability with High-Availability setups for our instances was no pain at all to set up, thanks to the clearly defined hooks in OpenNebula. Depending on the state of an instance or host, we can trigger an action. If a compute node suddenly goes down, all of its instances are migrated to an operational compute node in a matter of seconds.
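As a sketch of what this looks like in practice, the stock fault-tolerance hook shipped with OpenNebula can be enabled in oned.conf roughly as below (the exact arguments of host_error.rb vary between versions, so treat this as an illustration and check the documentation for your release):

```
# Migrate VMs away when a host enters the ERROR state
HOST_HOOK = [
    NAME      = "host_error",
    ON        = "ERROR",
    COMMAND   = "ft/host_error.rb",
    ARGUMENTS = "$ID -m -p 5",
    REMOTE    = "no" ]
```

The `-m` flag tells the hook to migrate the affected instances to a healthy host.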

In order to provide real High Availability, a good storage solution is one of the key parts: it prevents data loss and ensures an instance can always reach its own data. Our datastores use Ceph as RBD storage for all of our instances. OpenNebula made this easier than ever to set up with its Ceph RBD drivers, which just work like a charm.

To make things even simpler, we started using the OpenNebula docker-machine driver to enable our container orchestration platform, Rancher, to provision its own environments within OpenNebula. This enables us to provide fully operational Kubernetes clusters in minutes, with just a few keystrokes or an API call.
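For illustration, provisioning a node through that driver looks roughly like this (endpoint, credentials, and template name are placeholders; run `docker-machine create --driver opennebula --help` for the authoritative flag list of the driver version you install):

```shell
# Sketch: create a Docker host as an OpenNebula VM via docker-machine.
docker-machine create --driver opennebula \
  --opennebula-xmlrpcurl http://one.example.com:2633/RPC2 \
  --opennebula-user oneadmin \
  --opennebula-password "$ONE_PASSWORD" \
  --opennebula-template-name docker-host \
  rancher-node-1
```

Rancher drives the same mechanism under the hood when it provisions cluster nodes for us.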

Never forget the community

The very fact that we could write this blog post perfectly reflects the welcoming community behind the project. The rich, easy-to-follow documentation (given some basic background knowledge) also helps a lot along the way.

In summary

OpenNebula is definitely our choice for a cloud management platform, because it is one complete package that contains everything you need without integrating or extensively modifying modules. Thanks to its easy-to-use APIs, drivers, and hooks, we have a highly available platform that integrates nicely with our configuration management and container orchestration systems. Almost out of the box.

We have released a new website – ONEedge.io – to highlight OpenNebula’s evolving focus on edge computing. With workloads becoming more data-intensive and the need for low latency mounting rapidly, OpenNebula directly meets the need to take your private cloud infrastructure and distribute it easily around the globe, in proximity to users; to use various hosting resources, including bare metal providers; and to provide a flexible platform that integrates with other technologies.

Are you looking for a simple way to bring compute resources closer to your end users? Check out ONEedge.io, an enterprise solution with OpenNebula at its core, to bring your private cloud to the edge.

Our newsletter contains the highlights of the OpenNebula project and its Community throughout the month.

Technology

“Spring is the time of plans and projects.” – Leo Tolstoy

In this month of April, we had plenty on our plate, and the prospect of continued plans and projects certainly continues.  An unwavering effort went in this month, first, to wrap up a good portion of the open issues in v.5.8 Edge and to include some additional minor enhancements, culminating in the announcement of the v.5.8.1 maintenance release.

Likewise, on the VMware integration front, we announced the availability of vOneCloud v.3.4.0.  This OpenNebula distribution, optimised specifically for VMware vCenter deployments, is powered by v.5.8 Edge and therefore includes several of the edge-focused capabilities introduced with the OpenNebula Edge release.

Another exciting announcement this April was the release of the Kubernetes appliance in the OpenNebula Marketplace. While the Marketplace continues to show sustained growth, this new K8s appliance is one that will attract a lot of attention – allowing you to deploy a whole multi-node K8s cluster with just one click.

Community

This month OpenNebula Systems, in collaboration with Packet, carried out a dazzling use case showing how well-equipped you can be to reduce latencies and create distributed edge clouds using OpenNebula v.5.8.  We deployed and configured a distributed edge cloud using bare metal resources provided by Packet across 17 different data centers around the world, and established a global infrastructure to support a simulated video game release.  This distributed edge cloud was deployed and configured in just 25 minutes and cost less than $12/hour.  In addition to the detailed article outlining the simplicity and value of being able to create such a distributed architecture, we also published a video screencast of the exercise.

Outreach

With springtime here, and summer just around the corner, before you know it, October will be upon us and the OpenNebulaConf 2019 will be ready to kick off.  Consequently, this month we released our Call for Presentations for this year’s annual OpenNebula event.  The Call for Presentations will remain open until May 10th, 2019. We urge you all to take a moment to think about what you could propose as a presentation. Take advantage of this great opportunity to share your insight and experiences with the user community.  We’d love to have you present with us in Barcelona this October!

The TechDay season is ramping up, as well!  We, along with our partners at CSUC and StorPool, have been promoting the upcoming TechDay events in a couple weeks’ time.

These are already looking to be very well attended.  Remember, these are free one-day events, so don’t hesitate to sign up, and join us and other community members for an informative and exciting day “in the clouds”!

Stay connected!

Launching a distributed gaming cloud across 17 global locations in just 25 minutes, for little more than pocket change!

While most of today’s organizations are taking advantage of the long list of benefits offered by cloud computing, the growth of data-intensive workloads and the benefits of lower latency highlight the need to move beyond a simplistic “centralized cloud” approach. As businesses and applications serve global, mobile audiences, the benefits of distributed infrastructure are becoming apparent to an expanding list of use cases.

One clear example is gaming and entertainment.  

The growth of immersive and interactive gaming and media experiences is pushing the threshold on many levels. Just ask any twelve-year-old Fortnite player, and you’ll learn all about the importance of latency and jitter – and how frustrating it is when the experience doesn’t translate the way they want. The same story is being repeated across a huge range of industries and verticals, from industrial IoT and office productivity to mobility, store automation, live entertainment, and even real-time, AI-powered diagnosis of medical data.

While some of these problems are due to congestion or “last mile” issues, we’re mainly running into a simple physics problem: the speed of light!  Moving compute closer to the user dramatically changes the equation for what is possible. Enter, edge computing!

OpenNebula at the Edge

While OpenNebula has consistently provided a simple and stable platform for managing private cloud infrastructures, whether on-premises, hosted, federated, or hybrid, the new OpenNebula version 5.8 “Edge” is flexing its muscles by bringing key capabilities to create and manage highly distributed cloud infrastructures.

The ability to distribute workloads used to be the domain of only the largest websites and applications.  However, more and more organizations are looking to respond to their global users, and the public cloud has helped lower the barrier to doing this rapidly and affordably.

But what if you’re not solely a public cloud user?  Most enterprises have diverse infrastructure, including private and hybrid clouds.  As such, being able to expand private clouds to distributed dedicated infrastructure – for instance to address the ever-growing need for low latency – is of increasing value to our users.

With our latest release, OpenNebula provides the ability to expand a cloud by instantiating hosts and clusters using bare metal resources from providers like Packet or even Amazon Web Services (AWS now offers a line of bare metal compute instances).  With a single command, users can deploy and configure new clusters using these bare metal providers in locations around the world.
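The provisioning flow described above boils down to very few commands. A hedged sketch using the `oneprovision` tool (the template file name is hypothetical; such a template would hold the Packet token, project, plan, and facility for the target location):

```shell
# Create a complete edge cluster (hosts, datastores, vnets) in one run.
# packet-ams1.yaml is a hypothetical provision template for one site.
oneprovision create packet-ams1.yaml --debug

# Inspect the result: the provision, plus the regular OpenNebula objects.
oneprovision list
onehost list
```

Each run is independent, so several locations can be provisioned in parallel.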

Moreover, OpenNebula provides the ability to grow or reduce the size of one’s cloud infrastructure based on the active demands of the system.

A Real World Example: Gaming

To showcase OpenNebula’s capabilities, we will review the use case of a gaming company releasing its new video game to a global audience. As it establishes its OpenNebula private cloud, this (fictional) gaming company is very aware of the key requirements for its platform.

  1. The broad majority of video gaming now happens on mobile devices or devices connected over WiFi. User experience will therefore correlate very closely with the response time and latency between the cloud resources and the users’ consoles. So being able to deploy gaming services as close to players as possible is key.
  2. In order to meet fluctuating demands, the company will need to manage its distributed private cloud environment with speed and flexibility. This means being able to dynamically grow or shrink the cloud infrastructure according to real-time needs, creating new cloud resources where needed and scaling these resources according to dynamic user demands.
  3. Finally, the platform needs to be highly scalable – able to manage large-scale, highly distributed cloud infrastructures. The infrastructure cannot be limited to a single host location; it must have the flexibility to scale in size across multiple locations around the globe.

In this particular case, you’ll see how OpenNebula works with bare metal from Packet to provide the perfect building blocks for this use case, both at launch time and beyond.

Edge Cloud Infrastructure

Packet is a bare metal resource provider with the focus of bringing the experience of the cloud to physical infrastructure, regardless of what it is and where it resides.

The five-year-old company, which is backed by names such as SoftBank, Dell Technologies, and Samsung, manages tens of thousands of physical servers built by a dozen different manufacturers, across three architectures and over 20 global facilities, and supports 15+ official operating systems.  An important thing to note for this example is that a large percentage of Packet’s users deploy with its Custom iPXE feature, which allows each customer to bring their own OS image.

With its focus on bare metal, fast provisioning, and distributed locations, Packet is an ideal platform for building out the edge.

For this experiment, we are using the following infrastructure, provided by Packet:

Name Location Type
AMS1 Amsterdam, Netherlands c2.medium.x86
BOS2 Boston, MA USA c2.medium.x86
DFW1 Dallas, TX USA c2.medium.x86
DFW2 Dallas, TX USA x1.small.x86
EWR1 Parsippany, NJ USA c2.medium.x86
FRA2 Frankfurt, Germany c2.medium.x86
HKG1 Hong Kong, China x1.small.x86
IAD1 Ashburn, VA USA x1.small.x86
LAX1 Los Angeles, CA USA x1.small.x86
MRS1 Marseille, France x1.small.x86
NRT1 Tokyo, Japan x1.small.x86
ORD2 Chicago, IL USA c2.medium.x86
ORD3 Niles, IL USA c2.medium.x86
SIN1 Singapore x1.small.x86
SJC1 Sunnyvale, CA USA x1.small.x86
SYD1 Sydney, Australia x1.small.x86
YYZ1 Toronto, ON, Canada x1.small.x86

The underlying cloud is running OpenNebula v.5.8 “Edge”, and is instantiated on a Packet host in its Parsippany, NJ (USA) location.

This cloud then deploys and configures the host clusters, using the new OpenNebula provisioning feature, at the remaining locations. Each provision run produces a new ready-to-use dedicated OpenNebula cluster with its datastores, virtual networks and hosts.

The OpenNebula front-end host in Parsippany was prepared using the miniONE tool, which automates a single-node OpenNebula evaluation installation, completing in just 4 minutes. For the host, we chose the CentOS 7 operating system and the c2.medium.x86 (https://www.packet.com/cloud/servers/c2-medium-epyc/) hardware configuration.
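For reference, the miniONE bootstrap is essentially a one-liner (the download URL follows the project's GitHub releases convention; verify it against the OpenNebula/minione repository before running):

```shell
# Download and run the evaluation installer on a fresh CentOS 7 host.
wget 'https://github.com/OpenNebula/minione/releases/latest/download/minione'
sudo bash minione
```

The installer prints the Sunstone URL and oneadmin credentials when it finishes.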

Hardware and Game Selection

The host clusters deployed in the locations worldwide use two hardware configurations (x1.small.x86 and c2.medium.x86) with the Ubuntu 18.04 LTS operating system and the KVM hypervisor. The hosts run OpenNebula-managed KVM virtual machines with the Debian 9 guest operating system and the game server service for the online FPS game “Wolfenstein: Enemy Territory”.

We chose this service for its maturity and simplicity. The actual game used to demonstrate the functionality is “Enemy Territory: Legacy”, an open source project that provides a compatible client (and server) for “Wolfenstein: Enemy Territory”.  And as we walk through the details of this simulated “product launch”, we have direct insight into the latencies of all the servers, which we review below.

A key requirement for a successful experiment is having the game server services reachable from the public Internet by everyone. The OpenNebula installation was configured to manage the ad-hoc public IP ranges provided by Packet and assign them to the virtual machines through the standard OpenNebula IP management interfaces. Virtual machines can then have all their services exposed to the public transparently.
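In OpenNebula terms, this means registering each Packet-provided public range as an address range of a virtual network. A minimal, illustrative template (the name, driver, and addresses below are placeholders):

```
NAME   = "packet-public"
VN_MAD = "dummy"
AR = [
  TYPE = "IP4",
  IP   = "198.51.100.10",
  SIZE = "4"
]
```

Registered with `onevnet create`, the addresses in the range can then be leased to VMs like any other OpenNebula network.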

Experiment Execution

Watch the step-by-step video of the use case execution (7:28)

Each provision run prepares a cluster in one location in several phases.

  • Phase 1:  The new physical hosts are allocated and deployed on Packet, receiving a clean installation of the chosen operating system.
  • Phase 2:  The OpenNebula provision feature installs and configures the hosts so they can run the KVM hypervisor.
  • Phase 3:  The new hosts are added into OpenNebula as hypervisor hosts; the OpenNebula front-end transfers its own drivers and starts monitoring them.
  • Phase 4:  The virtual machines are started.
  • Phase 5:  The game server service is installed automatically right on boot.

The precise installation steps are passed to each VM via OpenNebula-specific metadata (contextualization). At the end, there is a fully working game server, automatically included in the list of all the other public game servers worldwide.
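To give an idea of the mechanism, a VM template's CONTEXT section can carry a bootstrap command that the contextualization packages run inside the guest on first boot. A sketch (the script path below is purely hypothetical):

```
CONTEXT = [
  NETWORK        = "YES",
  SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]",
  START_SCRIPT   = "/usr/local/bin/install-game-server.sh"
]
```

START_SCRIPT (or its base64-encoded variant) is what carried the game server installation steps in this exercise.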

Hosts and gaming service VMs were started from the base OS images and configured dynamically from scratch. No images with pre-installed or pre-configured services were used. As each provision execution prepares only a cluster in a single location, several independent provisions against different locations were started in parallel.

The following table shows the actual timing of each phase for various locations:

Name  Phase 1: Deploy host  Phase 2: Configure KVM host  Phase 3: Monitor by ONE  Phase 4: Start VM  Phase 5: Bootstrap game svc  TOTAL
(all times in seconds)
AMS1 462 163 98 23 82 831
BOS2 444 87 11 23 105 673
DFW1 309 129 44 15 78 579
DFW2 427 125 44 22 86 707
EWR1 548 77 3 8 109 748
FRA2 346 168 97 21 84 719
HKG1 351 352 375 376 146 1504
IAD1 305 93 12 32 80 525
LAX1 342 172 77 17 96 707
MRS1 329 194 104 24 75 729
NRT1 350 328 241 46 138 1105
ORD2 452 103 24 7 89 678
ORD3 438 103 25 36 85 691
SIN1 329 424 320 58 108 1242
SJC1 341 177 89 19 117 746
SYD1 345 462 342 60 171 1383
YYZ1 308 102 16 9 82 520

One can see that the quickest of all the deployments was Toronto (YYZ1), taking 520 seconds (8 minutes 40 seconds) in total. The longest was in Hong Kong (HKG1), taking 1504 seconds (just over 25 minutes).
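Those totals convert straightforwardly from seconds to minutes and seconds:

```shell
# Convert the fastest (YYZ1) and slowest (HKG1) totals from the table above.
for secs in 520 1504; do
  printf '%ss = %dm%02ds\n' "$secs" $((secs / 60)) $((secs % 60))
done
```

This prints 520s = 8m40s and 1504s = 25m04s, which is where the headline “25 minutes” figure comes from.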

Times to deploy a host on Packet are quite consistent, with low variance. The time to configure each host as a KVM hypervisor is affected by the latency between the front-end and the host, and by the performance of the nearest OS package mirror. The time for OpenNebula to monitor a host is affected only by the remote host’s latency and network throughput.

The same applies to the virtual machine start, as the base VM image (320 MiB) must initially be transferred from the front-end to each hypervisor host. For hosts distant from the front-end, the image transfer times can be quite long (for example, 171 seconds to Sydney, SYD1). During the game service bootstrap, we depend mainly on the performance of the nearest OS package mirror.

Below is a screenshot of the deployed and configured servers hosting the Enemy Territory video game. If you pay attention to the “PING” metric, you’ll notice the latency measured from the client location from which we orchestrated this exercise (Brno, Czech Republic) to the various nodes.  Understandably, the host location with the shortest latency was Frankfurt, Germany (FRA2), at 18 ms. Compare that to the latency from the Czech Republic to Sydney, Australia, and you’ll see 331 ms (almost 20x longer).

The ideal situation would be to stand up resources as close to the user base as possible, so that users in relative proximity to the nodes experience latencies below 10 ms.

So, with this OpenNebula distributed cloud infrastructure using core and edge bare metal resources from Packet, players of Enemy Territory will be directed to the node with the shortest latency, providing optimal performance. And the infrastructure admin will have the flexibility to scale the resources in each cluster according to the active traffic.

And when it came to tearing down the entire cloud architecture, the simultaneous deletion of the hosts took no more than 49 seconds.

What does a platform like this cost?

Packet has an innovative business model and a strong foothold at the forefront of a rapidly developing technology ecosystem. It is worthwhile digging deeper into Packet’s offerings. The hourly rate for the resources hosting our Enemy Territory distributed cloud came to no more than $11.40/hour!

The conclusion is clear

OpenNebula v.5.8 Edge takes a fresh approach to creating a distributed private cloud, not only by broadening support for lightweight LXD containers but, as we have seen here, by integrating support for simple cloud deployment on bare metal resources from providers like Packet. An administrator being able to create and manage a distributed private cloud with nodes in 17 locations around the world, deploy and configure them with a few clicks, and subsequently flex those configurations on the fly – all within 25 minutes and for under $12/hour – sounds like a perfect building block for the platforms of today, and tomorrow.

Speak at OpenNebulaConf 2019 in Barcelona, Spain!

It’s great to attend OpenNebulaConf, yet being a speaker is even better! Come share your insights and experiences with the user community. Whether you are a seasoned speaker or a first-timer, it matters little. This is a great opportunity to connect with your peers and to collaborate with the broader OpenNebula community.

Presentation topics are wide open. If you have a dynamic perspective or unique experiences to share, submit a proposal!

Check out the details and sign up.

We look forward to welcoming you in Barcelona!

 

A new appliance in the Marketplace:  Kubernetes = K8s

We are happy to announce a new addition to the steadily growing OpenNebula Marketplace. This time we are bringing you the most popular container orchestration platform – Kubernetes. As with the previously introduced appliances (you can read more about them in our previous blog post), the Kubernetes appliance gives you a simple, press-of-a-button way to create and deploy a functional service.

In the past, Kubernetes was notoriously hard to set up; that is the reason why projects like Rancher sprang up. (Do you want Rancher as a future appliance? Let us know!) We too have tried to make the creation of K8s clusters much simpler for you. The appliance supports multiple contextualization parameters so it can adapt to your needs and your required configuration. This works in much the same spirit as all the other ONE service appliances.

On top of this, we extended the simplicity and versatility of this appliance with OneFlow service support, which makes perfect sense for Kubernetes clusters. Now you can deploy a whole multi-node K8s cluster with just one click. More info can be found in our Service Kubernetes documentation.

Kubernetes, Docker, microservices, containers, and all those other trendy cloud technologies and terminologies can sometimes become confusing, and not everyone is fully versed in these new topics. (Have you heard about DevOps and CI/CD?) So we had better clarify exactly what our Kubernetes appliance does for you and what it doesn’t.

This service appliance provides you with a K8s cluster (one master node and an arbitrary number of worker nodes – including zero). Every node is just a regular VM, which you are already familiar with. OpenNebula does NOT manage the containers or pods inside a created K8s cluster. When you deploy this service appliance, you get a K8s cluster which exposes the Kubernetes API on a designated IP address of the master node. You can access it via kubectl or the UI dashboard (pictured below) to create pods, deployments, services, etc. You can also add more nodes to the cluster at any later time using contextualization. But other than that, you are in charge, and it is up to you to keep the cluster up and running.
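Once you have the master's IP address, access follows the usual Kubernetes conventions. A sketch (the IP address below is a placeholder, and the kubeconfig path is the kubeadm default, so verify it for the appliance version you deployed):

```shell
# Fetch the cluster's admin kubeconfig from the master VM.
scp root@203.0.113.10:/etc/kubernetes/admin.conf ./k8s-appliance.conf
export KUBECONFIG=$PWD/k8s-appliance.conf

# Talk to the exposed Kubernetes API as with any other cluster.
kubectl get nodes
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
```

From here on, everything inside the cluster is standard Kubernetes, managed by you rather than by OpenNebula.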

Have a look in the OpenNebula Marketplace.

Check out the video screencast of how to get started with the k8s appliance

 

OpenNebula Edge – Maintenance release v.5.8.1 is now available!

There’s plenty to be excited about with 5.8 Edge – and now we have published maintenance release v.5.8.1, with bug fixes and a set of minor new features, which include:

  • Added a timepicker for relative scheduled actions
  • Check vCenter cluster health in monitoring
  • Implemented nested AND and OR filters when filtering from the CLI
  • Added an input for the command to be executed in an LXD container through a VNC terminal
  • Updated Ceph requirements for LXD setups
  • Extended logs of LXD actions with the native container log
  • New API call one_vmpool_infoextended
  • Added official support for Sunstone banners

Check the release notes for the complete set of new features and bug fixes.

Relevant Links