OpenNebula Community and Customer Success Manager Introduction

Hello OpenNebula Community.

I want to take a brief moment to introduce myself, as I have recently joined the OpenNebula project and will be working very closely with you. My name is Michael Abdou, and I am the new Community and Customer Success Manager at OpenNebula. I am thrilled to join this Community, to have the opportunity to help foster a dynamic and collaborative environment, and to be part of this effort to bring value and innovation to the marketplace. I have worked most of my career in the United States for a Fortune 100 insurance company in the IT delivery space. I started out as a BI developer and later moved into the management track, leading teams across Development, Analysis, and Quality Assurance. In the last few years, much of my focus shifted from “conventional, mainstream” technologies and delivery to a more adventurous eye on innovative and emerging technologies – in my case, helping to make the transition to a Big Data architecture, as well as developing Platform as a Service (PaaS) solutions.

A main focus of my job here at OpenNebula will be to help foster an environment of pride and passion about our project, to make sure everyone has a practical and convenient channel to contribute, to promote and cultivate our spirit of collaboration, and to always keep the growth and success of the OpenNebula project within sight. I am here to support you, the Community, and to make sure we all have an exceptional user experience. What I ask of you is that you continue to be curious and open-minded. Share your experiences and insights. In the long run, this will only help our Community grow. If you have any questions, concerns, or suggestions, please feel free to reach out to me.

I really look forward to working together with you.

Best regards,
Michael


How Online RSS Reader Inoreader Migrated From Bare-metal Servers to OpenNebula + StorPool

Prolog

Building and maintaining a cloud RSS reader requires resources. Lots of them! Behind the deceptively simple user interface there is a complex backend with a huge datastore that has to fetch millions of feeds in time, store billions of articles indefinitely and make any of them available in just milliseconds – either by searching or simply by scrolling through lists. Even calculating the unread counts for millions of users is enough of a challenge that it deserves a special module for caching and maintenance. The most basic feature that every RSS reader should have – being able to filter only unread articles – requires so much resource power that it contributes to around 30% of the storage pressure on our first-tier databases.

Until recently we were using bare-metal servers to operate our infrastructure, meaning we deployed services like database and application servers directly on the operating system of the server. We were not using virtualization except for some really small micro-services, and even that was practically one physical server with local storage broken down into several VMs. Last year we reached a point where we had a 48U (rack-unit) rack full of servers. More than half of those servers were databases, each with its own storage – usually 4 to 8 spinning disks in RAID-10, with expensive RAID controllers equipped with cache modules and BBUs. All this was required to keep up with the needed throughput.

There is one big issue with this setup. Once a database server fills up (usually at around 3TB) we buy another one, and the full one becomes read-only. CPUs and memory on those servers remain heavily underutilized while the storage is full. For a long time we knew we had to do something about it, otherwise we would soon need to rent a second rack, which would have doubled our bill. The cost was not the primary concern, though. It just didn’t feel right to have a rack full of expensive servers that we couldn’t fully utilize because their storage was full.

Furthermore, redundancy was an issue too. We had redundancy on the application servers, but for databases of this size it’s very hard to keep everything redundant and fully backed up. Two years ago we had a major incident that almost cost us an entire server with 3TB of data, holding several months’ worth of articles. We completely recovered all the data, but it was close.

 

Big changes were needed!

While the development of new features is important, we had to stop for a while and rethink our infrastructure. After some long sessions and meetings with vendors we made a final decision:

We will completely virtualize our infrastructure and we will use OpenNebula + KVM for virtualization and StorPool for distributed storage.

 

 

Cloud Management

We chose this solution not only because it is practically free if you don’t need enterprise support, but also because it has proven to be very effective. OpenNebula is now mature enough and has so many use cases that it’s hard to ignore. It is completely open source, with a big community of experts, and offers optional enterprise support. KVM is now used as the primary hypervisor for EC2 instances in Amazon AWS. That alone speaks volumes, and OpenNebula is primarily designed to work with KVM too. Our experience with OpenNebula in the past few months hasn’t made us regret this decision even once.

 

Storage

Now, a crucial part of any virtualized environment is the storage layer. You aren’t really doing anything if you are still using the local storage on your servers. The whole idea of virtualization is that your physical servers are expendable. You should be able to tolerate a server outage without any data loss or service downtime. How do you achieve that? With a separate, ultra-high-performance, fault-tolerant storage system connected to each server via a redundant 10G network.

There’s EMC’s enterprise solution, which can cost millions and uses proprietary hardware, so it’s out of our league. Also, big vendors don’t usually play well with small clients like us. There’s a chance that we would just have to sit and wait for a ticket resolution if something broke, which contradicts our vision.

Then there’s Red Hat’s Ceph, which comes completely free of charge, but we were a bit afraid to use it since nobody on the team had the expertise required to run it in production with full confidence that, in the event of a crash, we would be able to recover all our data. We were on a very tight schedule with this project, so we didn’t have time to send someone for training. The performance figures were also not very clear to us and we didn’t know what to expect. So we decided not to risk it for our main datacenter. We are now using Ceph in our backup datacenter, but more on that later.

Finally, there’s one still relatively small vendor that just so happens to be located some 15 minutes away from us – StorPool. They were recommended to us by colleagues running similar services, and we had a quick kick-start meeting with them. After the meeting it was clear to us that these guys know what they are doing at the lowest possible level.
Here’s what they do in a nutshell (quoted from their website):

StorPool is a block-storage software that uses standard hardware and builds a storage system out of this hardware. It is installed on the servers and creates a shared storage pool from their local drives in these servers. Compared to traditional SANs, all-flash arrays, or other storage software StorPool is faster, more reliable and scalable.

Doesn’t sound very different from Ceph, so why did we choose them? Here are just some of the reasons:

  • They offer full support for a very reasonable monthly fee, saving us the need to have a trained Ceph expert on board.
  • They promise higher performance than Ceph.
  • They have their own OpenNebula storage addon (yeah, Ceph does too, I know).
  • They are a local company, so we can always pick up the phone and resolve any issue in minutes rather than hours or days, as it usually ends up with big vendors.

 

The migration

You can read the full story of our migration, with pictures and detailed explanations, on our blog.

I will try to keep it short and tidy here. Basically, we managed to slim down our inventory to half of the previous rack space. This allowed us to reduce our costs and create enough room for later expansion, while immediately and greatly increasing our compute and storage capacities. We mostly reused our old servers in the process, with some upgrades to make the whole OpenNebula cluster homogeneous – the same CPU model and memory across all servers, which allowed us to use the “host-passthrough” CPU mode to improve VM performance without the risk of a VM crash during a live migration. The process took us less than 3 months, with the actual migration happening in around two weeks. While we waited for the hardware to arrive we had enough time to play with OpenNebula in different scenarios, try out VM migrations and different storage drivers, and overall try to break it while it was still in a test environment.

 

The planning phase

So after we made our choice for virtualization it was time to plan the project. This happened in November 2017, so not very long ago. We rented a second rack in our datacenter. The plan was to install the StorPool nodes there and gradually move servers, converting them into hypervisors. Once everything was moved, we would remove the old rack.

We ordered 3 servers for the StorPool storage. Each of those servers has room for 16 hard disks. We only ordered half of the needed hard disks, because we knew that once we started virtualizing servers, we would salvage a lot of drives that wouldn’t be needed otherwise.

We also ordered the 10G network switches for the storage network and new Gigabit switches for the regular network to replace our old switches. For the storage network we chose the Quanta LB8. Those beasts are equipped with 48x 10G SFP+ ports, which is more than enough for a single rack. For the regular Gigabit network we chose the Quanta LB4-M. It has two additional 10G SFP+ modules, which we used to connect the two racks via optic cable.

We also ordered a lot of other smaller stuff like 10G network cards and a lot of CPUs and DDR memory. Initially we didn’t plan to upgrade the servers before converting them to hypervisors, in order to cut costs. However, after some benchmarking we found that our current CPUs were not up to the task. We were using mostly dual-CPU servers with Intel Xeon E5-2620 (Sandy Bridge) processors, and they were already dragging even before the Meltdown patches. After some research we chose to upgrade all servers to the E5-2650 v2 (Ivy Bridge), an 8-core (16 threads with Hyper-Threading) CPU with a turbo frequency of 3.4 GHz. We already had two of these, and benchmarks showed a two-fold increase in performance compared to the E5-2620.

We also decided to boost all servers to 128GB of RAM. We had different configurations, but most servers had 16-64GB and only a handful were already at 128GB. So we made some calculations and ordered 20+ CPUs and 500+GB of memory.

After we placed all the orders we had about a month before everything arrived, so we used that time to prepare what we could without the additional hardware.

 

The preparation phase

We used the whole of December and part of January, while waiting for our equipment to arrive, to prepare for the coming big migration. We learned how OpenNebula works, tried everything that came to our minds to break it, and watched how it behaved in different scenarios. This was a very important step for avoiding production mistakes and downtime later.
We didn’t just sit and wait for our hardware to arrive, though. We purchased one old but still powerful server with lots of memory to temporarily hold some virtual machines. The idea was to free up some physical servers so we could shut them down, upgrade them and convert them into hypervisors in the new rack.

 

The execution phase

After the hardware arrived it was time to install it in the new rack. We started with the StorPool nodes and the network. This way we were able to bring up the storage cluster prior to adding any hypervisor hosts.
Now it was time for StorPool to finalize the configuration of the storage cluster and to give us the green light to connect our first hypervisor to it. Needless to say, they were quick about it, and the next day we were able to bring in two servers from the old rack and start our first real OpenNebula instance with StorPool as the storage.
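
For the curious, hooking OpenNebula up to StorPool boils down to registering datastores that use the StorPool drivers. Here is a minimal sketch (the datastore name is made up, and the driver names are based on the publicly documented addon-storpool; our actual configuration had a few more attributes):

    # define an image datastore backed by StorPool (hypothetical name)
    cat > storpool-images.ds <<'EOF'
    NAME   = "storpool-images"
    TYPE   = "IMAGE_DS"
    DS_MAD = "storpool"
    TM_MAD = "storpool"
    EOF
    onedatastore create storpool-images.ds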

After we had our shiny new OpenNebula cluster with StorPool storage fully working, it was time to migrate the virtual machines that were still running on local storage. The guys from StorPool helped us a lot here by providing us with a migration strategy that we had to execute for each VM. If there is interest, we can describe the whole process in a separate post.

From here on we gradually migrated physical servers to virtual machines. The strategy was different for each server, since some of them were databases and others application and web servers. We managed to migrate all of them with a few seconds of downtime or none at all. At first we didn’t have much space for virtual machines, since we only had two hypervisors, but with each iteration we were able to convert more and more servers at once.


After that, each server went through a complete overhaul. CPUs were upgraded to 2x E5-2650 v2 and memory was bumped to 128GB. The expensive RAID controllers were removed from the expansion slots, and in their place we installed 10G network cards. Large (>2TB) hard drives were removed and smaller drives were installed just for the OS. After the servers were re-equipped, they were installed in the new rack and connected to the OpenNebula cluster. The guys from StorPool configured each server to have a connection to the storage and verified that it was ready for production use. The first 24 leftover 2TB hard drives were immediately put to work in our StorPool cluster.
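
Adding each re-equipped server to the cluster was then a one-liner (a sketch with a made-up hostname, assuming the stock KVM drivers):

    onehost create kvm-node07 -i kvm -v kvm   # register the new hypervisor with the KVM drivers
    onehost list                              # it should appear with status "on" once monitoring succeeds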

 

The result

In just a couple of weeks of hard work we managed to migrate everything!

In the new rack we have a total of 120TB of raw storage, 1.5TB of RAM and 400 CPU cores. Each server is connected to the network with 2x10G network interfaces.

That’s roughly 4 times the capacity and 10 times the network performance of our old setup with only half the physical servers!

The flexibility of OpenNebula and StorPool allows us to use the hardware very efficiently. We can spin up virtual machines in seconds with any combination of CPU, memory, storage and network interfaces, and later we can change any of those parameters just as easily. It’s DevOps heaven!
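
To illustrate, starting a new VM is a single command; this is just a sketch with hypothetical template and network names, with memory given in MB:

    # instantiate a VM from a template, overriding capacity and attaching it to a virtual network
    onetemplate instantiate "debian-base" --name db-shard-12 \
        --cpu 4 --vcpu 4 --memory 16384 --nic "storage-net"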

This setup will be enough for our needs for a long time, and we have more than enough room for expansion should the need arise.

 

Our OpenNebula cluster

We now have more than 60 virtual machines, because we have split some physical servers into several smaller VMs with load balancers for better load distribution, and we have allocated more than 38TB of storage.

We have 14 hypervisors with plenty of resources available on each of them. All of them use the same CPU model, which gives us the ability to use the “host-passthrough” CPU mode of QEMU to improve VM performance without the risk of a VM crash during a live migration.
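
In the VM template this is a single extra attribute (a minimal excerpt of how we understand the setting; adjust it to your own templates):

    CPU_MODEL = [ MODEL = "host-passthrough" ]   # expose the host CPU to the guest; only safe to live-migrate between identical hosts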

We are very happy with this setup. Whenever we need to start a new server, it only takes minutes to spin up a new VM instance with whatever CPU and memory configuration we need. If a server crashes, all VMs will automatically migrate to another server. OpenNebula makes it really easy to start new VMs, change their configurations, manage their lifecycle and even completely manage your networks and IP address pools. It just works!
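
For example, moving a running VM off a host for planned maintenance is one command (illustrative IDs):

    onevm migrate --live 1042 kvm-node03   # live-migrate VM 1042 to another hypervisor, no downtime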

StorPool, on the other hand, makes sure that we have all the needed IOPS at our disposal whenever we need them.

 

Goodies

We are using Graphite + Grafana to plot some really nice graphs for our cluster.

We borrowed the solution from here. That’s what’s so great about open source software!
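
The mechanism is simple: a script periodically collects cluster metrics and writes them to Graphite's plaintext port, and Grafana draws the dashboards from there. A one-line example of the idea (the metric name and host are made up):

    echo "one.cluster.vms_running 63 $(date +%s)" | nc graphite.example.com 2003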

Our team is constantly informed about the health and utilization of our cluster. A glance at our wall-mounted TV screen is enough to tell that everything is alright. We can see both our main and backup data centers, both running OpenNebula. It’s usually all green :)

 

StorPool also uses Grafana for their performance monitoring, and they have provided us with access to it, so we can get insights about what the storage is doing at any moment, which VMs are the biggest consumers, etc. This way we always know when a VM has gone rogue and is stealing our precious IOPS.

 

Epilog

If you made it this far – Congratulations! You have geeked out as much as we did building this infrastructure with the latest and greatest technologies like OpenNebula and StorPool.

OpenNebula Newsletter – August 2018

Our monthly newsletter contains the major achievements of the OpenNebula project and its community this August.

Technology

The team is working to ship a new maintenance release of Blue Flash shortly, version 5.6.1. It includes several bugfixes as well as non-disruptive minor enhancements, like revisited quota categories for running and total VMs. Among the bugfixes is a new way of handling DB upgrades so that different encodings do not break the final OpenNebula install. Stay tuned!
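
As usual, if you run your own deployment, back up the database before upgrading. A quick sketch, assuming a MySQL backend and illustrative credentials:

    onedb backup  -u oneadmin -p onepass -d opennebula    # dump the current database first
    onedb upgrade -u oneadmin -p onepass -d opennebula -v # then run the schema upgrade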

Also, the 5.8 roadmap is taking shape. One of the novelties will be support for a new container technology. This is still work in progress, so we’ll disclose it in the next newsletter (or sooner if you keep up to date with our development portal). How’s that for a cliffhanger?

We are proud to be among the first batch of technologies ready to manage VMware clouds on AWS, VMware’s shiny new service. Check out more in our blog.

A new version of vOneCloud, 3.2, has been released this month. vOneCloud 3.2 is powered by OpenNebula 5.6 ‘Blue Flash’ and, as such, includes functionality present in Blue Flash relevant to vOneCloud: a revamped import mechanism, better overall driver performance, VNC options for Wild VMs, reworked network creation, VM migration between clusters, the marketplace (this one is a biggie!), Docker integration, scheduled periodic actions, and more. If you have a vSphere infrastructure and you want to turn it into a multi-tenant private cloud with a slick self-service provisioning portal, give it a try!

Community

It is always pleasant to see how engaged people from the community are during these summer months. We want to give a shout-out and two big thumbs up to everyone who helps newcomers to OpenNebula in the community support forum! This greatly helps the project.

A report by the European Union on open source and standards is bound to talk about OpenNebula, and this one indeed does. It is worth reading to see the scope of your favorite CMP project.

OpenNebula was designed to be Cloud-API agnostic. It provides Cloud consumers with choice of interfaces, from open Cloud to de-facto standards. OpenNebula does not try to reinvent the wheel and implements existing standards when available.

Also about standards, we are very proud that OpenNebula is the first reference implementation of the OCCI cloud computing API standard.

Outreach

Remember that if you register for OpenNebulaConf 2018 before the 15th of September you will get a 20% discount! Check out the excellent keynotes and talks in the agenda; this conference is packed with amazing feedback from community members, so don’t miss out!

OpenNebulaConf EU, Amsterdam 2018 is sponsored by StorPool, Linbit and NTS as Platinum Sponsors and Virtual Cable SLU and root.nl as Silver Sponsors. There are still spots available to get the most out of OpenNebulaConf 2018 by joining our Sponsor Program. Read more about how to sponsor and the benefits here.

An OpenNebula team delegation attended VMworld 2018 US in Las Vegas, running the OpenNebula and vOneCloud booth. In case you missed it, and you want an OpenNebula pen, stickers and a look at the latest features of OpenNebula with a live demo, you still have the chance to come to Barcelona this November for the European VMworld!


This month Hitachi Vantara hosted a TechDay in Santa Clara, where a hands-on tutorial was given to the attendees, along with several quality talks, including a really interesting one by the hosts on their journey from vCloud Director to OpenNebula.

Also, if you are in the neighborhood, do not miss the following TechDay and get your free OpenNebula training session!

NTS to Sponsor OpenNebulaConf 2018

OpenNebulaConf 2018 is getting closer and we would like to announce NTS Netzwerk Telekom Service AG as a new Platinum Sponsor.

If you want to participate in OpenNebulaConf and meet NTS and other OpenNebula users and partners, remember that early bird registration with a 20% discount is available until September 15th. Also, if your company is interested in sponsoring OpenNebulaConf 2018, there are still slots available.

About NTS Captain (Cloud Automation Platform)

In conventional IT departments, workload and complexity are constantly on the increase. However, the respective IT resources are not growing at the same pace. As a result, problems such as inefficiency, long waiting times, missing standards and decentralized management very often occur. Our new product NTS Captain enables IT departments to present themselves as an internal service provider and thus to deal with requests in a fast and efficient way.

With the help of NTS Captain, NTS customers are changing their IT organizations into agile internal infrastructure providers that deliver answers to new challenges such as DevOps. In this way, customers have a much tighter grip on their IT. NTS Captain is based on OpenNebula and can be integrated into an existing VMware environment as a self-service platform without any issues.

About NTS

No matter where you are on your way into the Cloud, NTS as a professional consultant will help you make the right choice for your Cloud strategy! We gladly support you with our expertise when implementing Cloud strategies and we offer comprehensive advice along the entire value chain. We develop individual Cloud strategies, and by applying our Cloud methodology we create synergies that make our customers more powerful, thanks to a versatile IT infrastructure on-premises, in the private Cloud or in the public Cloud.

OpenNebula and VMware Announce OpenNebula for VMware Cloud on AWS


OpenNebula and VMware have just announced that OpenNebula is available to customers of VMware Cloud™ on AWS. VMware Cloud on AWS brings together VMware’s enterprise-class Software-Defined Data Center (SDDC) software and elastic, bare-metal infrastructure from Amazon Web Services (AWS) to give organizations a consistent operating model and application mobility for private and public cloud. OpenNebula brings cloud orchestration and provisioning features to customers of VMware Cloud on AWS, integrating on-premises vSphere deployments with VMware Cloud on AWS.

The OpenNebula team will be present at VMworld US next week in Las Vegas with a booth (#2008) dedicated to showcasing the new features of OpenNebula 5.6 and vOneCloud 3.2. OpenNebula 5.6 has been validated and is supported on VMware Cloud on AWS. Customers can contact the support team through the commercial support portal to learn about specific configurations and limitations.

The press release is available here.

 

 

vOneCloud 3.2 Released – Marketplace and Docker

OpenNebula Systems has just announced the availability of vOneCloud version 3.2. This is the first vOneCloud release that offers full storage and network management capabilities.

vOneCloud 3.2 is powered by OpenNebula 5.6 ‘Blue Flash’, and, as such, includes functionality present in Blue Flash relevant to vOneCloud:

  • Revamped import mechanism: vOneCloud Sunstone import of vCenter resources has been greatly streamlined.
  • Improved overall driver performance: all operations, especially monitoring, run quicker and consume fewer resources.
  • VNC options for Wild VMs: they can now be defined at import time to avoid collisions.
  • Reworked network creation, with more admin feedback in the network representation.
  • Migration of VMs between clusters: it is now possible to migrate VMs between different vCenter clusters from vOneCloud.
  • Marketplace: vOneCloud users and admins can now enjoy the OpenNebula Systems public and private marketplaces to easily download new appliances.
  • Docker integration: easily build a Docker fabric using vOneCloud.
  • Scheduled periodic actions, now with time relative to VM creation. Check the VM Template creation dialog for options.

Multiple bugfixes and documentation improvements have been included in this version. The complete list of changes can be checked on the development portal.

OpenNebula Systems will run a booth at VMworld 2018 US in Las Vegas and at VMworld 2018 EU in Barcelona, with live demos of the new version.

vOneCloud 3.2 has been certified with support for vSphere 5.5, 6.0 and 6.5.

Relevant Links

OpenNebula Newsletter – July 2018

This monthly newsletter gives an overview of the work and achievements during the last month by the OpenNebula project and its community.

The Santa Clara TechDay is around the corner. If you are in the area at the end of August, do not miss the chance to register.

Technology

The team is working on the roadmap definition for OpenNebula 5.8. Yes! You read that right. 5.6 has only recently been taken out of the oven, but 5.8 is already in the works. There is still time to influence the roadmap, so please feel free to head over to our development page on GitHub and let us know which cool features we can add to your favourite cloud management platform.

After the recent release of vOneCloud 3.0.7, the team is also working on a new version of vOneCloud (3.2), based on OpenNebula ‘Blue Flash’ 5.6.0, to bring the innovations in the vCenter driver to vOneCloud: stability and performance improvements, and new features like extended multi-cluster support and a redesigned import workflow with new Sunstone tabs.

In case you haven’t heard, AWS now offers a bare-metal service as another choice of EC2 instance. This enables you to leverage the highly scalable and available AWS public cloud infrastructure to deploy your own private cloud platform based on full virtualization. We’ve prepared a post describing in detail how you can deploy an OpenNebula instance on AWS bare metal to build a private cloud on a public cloud provider.

During the last few months we have been working on a new internal project to enable disaggregated private clouds. The next OpenNebula release will bring the tools and methods needed to grow your private cloud infrastructure with physical resources, initially individual hosts but eventually complete clusters, running on remote bare-metal cloud providers.

Community

It appears that not everyone was at the beach this past month of July. The OpenNebula community is as engaged and vibrant as ever; let us highlight a few examples.

Our friends at Nordeus agree with us that OpenNebula and Ansible are a match made in heaven. See this blog post on how they manage their virtual infrastructure with Ansible modules that talk to OpenNebula. Delicious!

And now for a revisited blast from the past. This article describes a vulnerability in OpenNebula which was fixed a while back; it is a very interesting security read. It also describes OpenNebula in a very to-the-point paragraph, which we would like to highlight further.

By relying on standard Linux tools as far as possible, OpenNebula reaches a high level of customizability and flexibility in hypervisors, storage systems, and network infrastructures.

We love community feedback. The critical kind, because it makes us improve further. And the good kind, because it makes us blush, like this tweet about a smooth upgrade to 5.6. Smooth upgrades are part of our identity!

Outreach

Remember that if you register for OpenNebulaConf 2018 before the 15th of September you will get a 20% discount! Check out the excellent keynotes and talks in the agenda; this conference is packed with amazing feedback from community members, so don’t miss out!

OpenNebulaConf EU, Amsterdam 2018 is sponsored by StorPool, Linbit and NTS as Platinum Sponsors and Virtual Cable SLU and root.nl as Silver Sponsors. There are still spots available to get the most out of OpenNebulaConf 2018 by joining our Sponsor Program. Read more about how to sponsor and the benefits here.

Members of the OpenNebula team will be presenting a new version of vOneCloud, alongside OpenNebula 5.6.0, at VMworld 2018 US in Las Vegas. If you are around, don’t forget to drop by booth 2008 and chat with us! We will also be running a booth at VMworld EU 2018, which will be held the 20th of November in Barcelona.

Also, if you are in the neighborhood, do not miss the following two TechDays and get your free OpenNebula training session!

Wishing you the best summer!

Agenda of TechDay Santa Clara CA – 30AUG18

We are organizing a TechDay in Santa Clara, CA, on the 30th of August hosted by Hitachi Vantara.


This event is a great chance to meet and share knowledge among cloud enthusiasts.

As usual we will have an OpenNebula hands-on tutorial in the morning and some talks in the afternoon by cloud experts from Hitachi and OpenNebula Systems.

Due to the limited availability of seats, early registration is strongly recommended to ensure your participation.

See you in Santa Clara!

 

Agenda of TechDay Frankfurt – 26SEP18

We are organizing a TechDay on the 26th of September in Frankfurt in collaboration with our friends from LINBIT.


This event is a great chance to meet and share knowledge among cloud enthusiasts.

As usual we will have an OpenNebula hands-on tutorial in the morning and some talks in the afternoon by cloud experts from LINBIT, Mellanox, Canonical and 24th Technology.

Make sure you register as soon as possible because the seats are almost gone!

See you in Frankfurt!

 

vOneCloud/OpenNebula at VMworld 2018 US in Las Vegas

Next 26-30 August, VMworld 2018 US will be held in Las Vegas. This is a must-attend event where almost everyone with an interest in virtualization and cloud computing will be networking with industry experts.

The OpenNebula team will be present at VMworld with a booth dedicated to showcasing the upcoming vOneCloud 3.2 (due for release in a few days, incorporating the new OpenNebula 5.6), the open source replacement for VMware vCloud. There will be a focus on new features such as multiple cluster network support and vCenter cluster migration, and tools such as the vCenter Marketplace and a new OnevCenter Import command to easily import any vCenter resource.

If you are planning to attend VMworld next month, make sure you register, and do not forget to come around our booth, 2008. You will be able to see in a live demo how a VMware-based infrastructure can be turned into a cloud with a slick, fully functional self-service portal to deliver a VM catalog to your end users, in 5 minutes!