Our monthly newsletter contains the major achievements of the OpenNebula project and its community this August.

Technology

The team is working on a new maintenance release of Blue Flash, version 5.6.1, due shortly. It includes several bugfixes as well as non-disruptive minor enhancements, like revisited quota categories for running and total VMs. Among the bugfixes is a new way of handling DB upgrades, so that different encodings do not break the final OpenNebula install. Stay tuned!

Also, the 5.8 roadmap is taking shape. One of the novelties will be support for a new container technology. This is still work in progress, so we'll disclose it in the next newsletter (or earlier, if you keep up to date with our development portal). How's that for a cliffhanger?

We are proud to be among the first batch of technologies ready to manage VMware clouds on AWS, VMware's shiny new service. Check out more in our blog.

A new version of vOneCloud, 3.2, has been released this month. vOneCloud 3.2 is powered by OpenNebula 5.6 'Blue Flash' and, as such, includes the functionality present in Blue Flash relevant to vOneCloud: a revamped import mechanism, improved overall driver performance, VNC options for Wild VMs, reworked network creation, VM migration between clusters, the marketplace (this one is a biggie!), Docker integration, scheduled periodic actions, and more. If you have a vSphere infrastructure and you want to turn it into a multi-tenant private cloud with a slick self-service provisioning portal, give it a try!

Community

It is always pleasant to see how engaged the community remains during these summer months. We want to give a shout-out and two big thumbs up to everyone who helps newcomers to OpenNebula in the community support forum! This greatly helps the project.

A report by the European Union on open source and standards is bound to talk about OpenNebula, and this one indeed does. It is worth reading to see the scope of your favorite CMP project.

OpenNebula was designed to be cloud-API agnostic. It provides cloud consumers with a choice of interfaces, from open cloud APIs to de-facto standards. OpenNebula does not try to reinvent the wheel and implements existing standards where available.

Also about standards, we are very proud that OpenNebula is the first reference implementation of the OCCI cloud computing API standard.

Outreach

Remember that if you register for OpenNebulaConf 2018 before the 15th of September you will get a 20% discount! Check out the excellent keynotes and talks in the agenda; this conference is packed with amazing feedback from community members, so don't miss out!

OpenNebulaConf EU, Amsterdam 2018 is sponsored by StorPool, Linbit and NTS as Platinum Sponsors, and Virtual Cable SLU and root.nl as Silver Sponsors. There are still spots available to get the most out of OpenNebulaConf 2018 by joining our Sponsor Program. Read more about how to sponsor and the benefits here.

An OpenNebula team delegation attended VMworld 2018 US in Las Vegas, running the OpenNebula and vOneCloud booth. In case you missed it, and you want an OpenNebula pen, stickers and a look at the latest OpenNebula features with a live demo, you still have the chance to come to the European VMworld in Barcelona this November!


This month Hitachi Vantara held a TechDay in Santa Clara, where attendees enjoyed a hands-on tutorial and several quality talks, including a really interesting one by the hosts on their journey from vCloud Director to OpenNebula.

Also, if you are in the neighborhood, do not miss the next TechDay and get your free OpenNebula training session!

OpenNebula Conf 2018 is getting closer and we would like to announce NTS Netzwerk Telekom Service AG as new Platinum Sponsor.

If you want to participate in OpenNebulaConf and meet NTS and other OpenNebula users and partners, remember that early bird registration with 20% discount is available until September 15th. Also, if your company is interested in sponsoring OpenNebulaConf 2018 there are still slots.

About NTS Captain (Cloud Automation Platform)

In conventional IT departments, workload and complexity are constantly increasing. However, the respective IT resources are not growing at the same pace. As a result, problems such as inefficiency, long waiting times, missing standards and decentralized management often occur. Our new product NTS Captain enables IT departments to present themselves as internal service providers and thus to deal with requests in a fast and efficient way.

With the help of NTS Captain, NTS customers are changing their IT organizations into agile internal infrastructure providers that deliver answers to new challenges such as DevOps. In this way, customers have a much tighter grip on their IT. NTS Captain is based on OpenNebula and can be integrated into an existing VMware environment as a self-service platform without any issues.

About NTS

No matter where you are on your way into the cloud, NTS as a professional consultant will help you make the right choices for your cloud strategy! We gladly support you with our expertise when implementing cloud strategies, and we offer comprehensive advice along the entire value chain. We develop individual cloud strategies, and by using our cloud methodology we create synergies that make our customers more powerful, thanks to a versatile IT infrastructure on-premises, in the private cloud or in the public cloud.


OpenNebula and VMware have just announced that OpenNebula is available to customers of VMware Cloud™ on AWS. VMware Cloud on AWS brings together VMware's enterprise-class Software-Defined Data Center (SDDC) software and elastic, bare-metal infrastructure from Amazon Web Services (AWS) to give organizations a consistent operating model and application mobility across private and public cloud. OpenNebula brings cloud orchestration and provisioning features to customers of VMware Cloud on AWS, integrating on-premises vSphere deployments with the service.

The OpenNebula team will be present at VMworld US next week in Las Vegas with a booth (#2008) dedicated to showcasing the new features of OpenNebula 5.6 and vOneCloud 3.2. OpenNebula 5.6 has been validated and is supported on VMware Cloud on AWS. Customers can contact the support team through the commercial support portal to learn about specific configurations and limitations.

The press release is available here.


OpenNebula Systems has just announced the availability of vOneCloud version 3.2. This is the first vOneCloud release that offers full storage and network management capabilities.

vOneCloud 3.2 is powered by OpenNebula 5.6 ‘Blue Flash’, and, as such, includes functionality present in Blue Flash relevant to vOneCloud:

  • Revamped import mechanism: vOneCloud Sunstone import of vCenter resources has been greatly streamlined.
  • Overall driver performance: all operations, especially monitoring, run quicker and consume fewer resources.
  • VNC options for Wild VMs: these can now be defined at import time to avoid collisions.
  • Network creation reworked, with more admin feedback in the network representation.
  • Migrate VMs between clusters: it is now possible to migrate VMs between different vCenter clusters from vOneCloud.
  • Marketplace: vOneCloud users and admins can now enjoy the OpenNebula Systems public and private marketplaces to easily download new appliances.
  • Docker integration: easily build a Docker fabric using vOneCloud.
  • Schedule periodic actions, now with time relative to VM creation. Check the VM Template creation dialog for options.

Multiple bugfixes and documentation improvements have been included in this version. The complete list of changes can be checked on the development portal.

OpenNebula Systems will run a booth at VMworld 2018 US in Las Vegas and at VMworld 2018 EU in Barcelona, with live demos of the new version.

vOneCloud 3.2 has been certified with support for vSphere 5.5, 6.0 and 6.5.

Relevant Links

This monthly newsletter gives an overview of the work and achievements during the last month by the OpenNebula project and its community.

The Santa Clara TechDay is around the corner. If you are in the area at the end of August, do not miss the chance to register.

Technology

The team is working on the roadmap definition for OpenNebula 5.8. Yes! You read that right. 5.6 has only recently been taken out of the oven, but 5.8 is already in the works. There is still time to influence the roadmap, so please feel free to head over to our development page on GitHub and let us know which cool features we can add to your favourite cloud management platform.

After the recent release of vOneCloud 3.0.7, the team is also working on a new version of vOneCloud (3.2), based on OpenNebula 'Blue Flash' 5.6.0, to bring the innovations in the vCenter driver to vOneCloud: stability and performance improvements, and new features like extended multi-cluster support and a redesigned import workflow with new Sunstone tabs.

In case you haven't heard, AWS now offers a bare-metal service as another choice of EC2 instance. This enables you to leverage the highly scalable and available AWS public cloud infrastructure to deploy your own private cloud platform based on full virtualization. We've prepared a post describing in detail how you can deploy an OpenNebula instance on AWS bare metal to build a private cloud on a public cloud provider.

During the last months we have been working on a new internal project to enable disaggregated private clouds. The next OpenNebula release will bring the tools and methods needed to grow your private cloud infrastructure with physical resources, initially individual hosts but eventually complete clusters, running on a remote bare-metal cloud provider.

Community

It appears that not everyone was at the beach this past July. The OpenNebula community is as engaged and vibrant as ever; let us highlight a few examples.

Our friends at Nordeus agree with us that OpenNebula and Ansible are a match made in heaven. See this blog post on how they manage their virtual infrastructure with Ansible modules that talk to OpenNebula. Delicious!

And now for a revisited blast from the past. This article describes a vulnerability in OpenNebula that was fixed a while back; it is a very interesting security read. It also describes OpenNebula in a very to-the-point paragraph, which we would like to highlight:

By relying on standard Linux tools as far as possible, OpenNebula reaches a high level of customizability and flexibility in hypervisors, storage systems, and network infrastructures.

We love community feedback. The critical kind, because it makes us improve further. And the good kind, because it makes us blush, like this tweet about a smooth upgrade to 5.6. Smooth upgrades are our hallmark!

Outreach

Remember that if you register for OpenNebulaConf 2018 before the 15th of September you will get a 20% discount! Check out the excellent keynotes and talks in the agenda; this conference is packed with amazing feedback from community members, so don't miss out!

OpenNebulaConf EU, Amsterdam 2018 is sponsored by StorPool, Linbit and NTS as Platinum Sponsors, and Virtual Cable SLU and root.nl as Silver Sponsors. There are still spots available to get the most out of OpenNebulaConf 2018 by joining our Sponsor Program. Read more about how to sponsor and the benefits here.

Members of the OpenNebula team will be presenting a new version of vOneCloud, alongside OpenNebula 5.6.0, at VMworld 2018 US in Las Vegas. If you are around, don't forget to stop by booth 2008 and chat with us! We will also be running a booth at VMworld EU 2018, to be held the 20th of November in Barcelona.

Also, if you are in the neighborhood, do not miss the following two TechDays and get your free OpenNebula training session!

Wishing you the best summer!

We are organizing a TechDay in Santa Clara, CA, on the 30th of August hosted by Hitachi Vantara.


This event is a great chance to meet and share knowledge among cloud enthusiasts.

As usual we will have an OpenNebula hands-on tutorial in the morning and some talks in the afternoon by cloud experts from Hitachi and OpenNebula Systems.

Due to the limited availability of seats, early registration is strongly recommended to ensure your participation.

See you in Santa Clara!


We are organizing a TechDay on the 26th of September in Frankfurt, in collaboration with our friends from LINBIT.


This event is a great chance to meet and share knowledge among cloud enthusiasts.

As usual we will have an OpenNebula hands-on tutorial in the morning and some talks in the afternoon by cloud experts from LINBIT, Mellanox, Canonical and 24th Technology.

Make sure you register as soon as possible because the seats are almost gone!

See you in Frankfurt!


Next August 26-30, VMworld 2018 US will be held in Las Vegas. This is a must-attend event where almost everyone with an interest in virtualization and cloud computing will be networking with industry experts.

The OpenNebula team will be present at VMworld with a booth dedicated to showcasing the upcoming vOneCloud 3.2 (due for release in a few days, incorporating the new OpenNebula 5.6), the open source replacement for VMware vCloud. There will be a focus on new features such as multi-cluster network support and vCenter cluster migration, and tools such as the vCenter Marketplace and a new onevcenter import command to easily import any vCenter resource.

If you are planning to attend VMworld next month, make sure you register and do not forget to come by our booth, 2008. You will be able to see in a live demo how a VMware based infrastructure can be turned into a cloud with a slick, fully functional self-service portal delivering a VM catalog to your end users, in 5 minutes!

The OpenNebula team is proud to announce a new stable version of the leading open source Cloud Management Platform. OpenNebula 5.6 (Blue Flash) is the fourth major release of the OpenNebula 5 series. A significant effort has been applied in this release to enhance the features introduced in 5.4 Medusa, while keeping an eye on implementing the features most demanded by the community. A massive set of improvements happened at the core level to increase robustness and scalability, and a major refactor happened in the vCenter integration, particularly in the import process, which has been streamlined. Virtually every component of OpenNebula has been reviewed to target usability and functional improvements, while keeping API changes to a minimum to avoid disrupting ecosystem components.

In this release, several development efforts have been invested in making OpenNebula even better for large-scale deployments. These improvements include both changes in the OpenNebula core to better handle concurrency and refined interfaces. The Sunstone dashboard has been redesigned to provide sensible information in a more responsive way. Sunstone also features some new styling touches here and there, and it has been updated to version 5 of Font Awesome.

Blue Flash also includes several quality-of-life improvements for end users. In particular, it is now possible to schedule periodic actions on VMs. Want to shut down your VM every Friday at 5 p.m. and start it on Monday at 7 a.m. just before work? We've got you covered. Also, don't want to accidentally terminate that important VM, or want to freeze a network? Now you can set locks on common resources to prevent actions from being performed on them.
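As a quick sketch of the new locks feature, locking and unlocking a VM from the CLI could look like this (VM ID 42 is a hypothetical example, and exact flags may vary with your OpenNebula version):

```shell
# Prevent actions (terminate, etc.) from being performed on VM 42
onevm lock 42

# ... later, lift the lock again
onevm unlock 42
```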

5.6 completes the major redesign of the vCenter driver started in 5.4. The new vCenter integration features stability and performance improvements, as well as important new features like extended multi-cluster support and a redesigned import workflow with new Sunstone tabs and a new CLI.

New integrations also bring Docker management. Any OpenNebula cloud can import from the marketplace a VM prepared to act as a Docker engine, allowing the deployment of Docker applications on top. Also, an integration with Docker Machine is available to seamlessly manage Docker engines without having to interact with OpenNebula APIs or interfaces.

Following our tradition this OpenNebula release is named after NGC 6905, also known as the Blue Flash Nebula, a planetary nebula in the constellation Delphinus. It was discovered by William Herschel in 1784.

OpenNebula 5.6 Blue Flash is considered to be a stable release and, as such, an update is recommended for production environments.

The OpenNebula project would like to thank the community members and users who have contributed to this software release by being active with the discussions, answering user questions, or providing patches for bugfixes, features and documentation.

The SetUID/SetGID functionality for VM Templates is funded by University of Louvain. The Ceph drivers enabling VM disks in the hypervisor local storage are funded by Flexyz B.V.

Relevant Links

Intro

Now that AWS offers a bare-metal service as another choice of EC2 instance, you are able to deploy virtual machines based on HVM technologies, like KVM, without the heavy performance overhead imposed by nested virtualization. This enables you to leverage the highly scalable and available AWS public cloud infrastructure to deploy your own cloud platform based on full virtualization.

Architecture Overview

The goal is to have a private cloud running KVM virtual machines, able to communicate with each other, with the hosts, and with the Internet, running on remote and/or local storage.

Compute

i3.metal instances, besides being bare metal, have a very high compute capacity. We can create a powerful private cloud with a small number of these instances acting as worker nodes.

Orchestration

Since OpenNebula is a very lightweight cloud management platform, and the control plane doesn’t require virtualization extensions, you can deploy it on a regular HVM EC2 instance. You could also deploy it as a virtual instance using a hypervisor running on an i3.metal instance, but this approach adds extra complexity to the network.

Storage

We can leverage the i3.metal instances' high bandwidth and fast storage to build a local-backed image datastore. However, having a shared NAS-like datastore would be more productive. Although we could have a regular EC2 instance providing an NFS server, AWS offers a service specifically designed for this use case: EFS.

Networking

OpenNebula requires a service network for the infrastructure components (frontend, nodes and storage) and instance networks for VMs to communicate. This guide will use Ubuntu 16.04 as the base OS.

Limitations

AWS provides a network stack designed for EC2 instances, in which you don't really control interconnection devices like the Internet gateway, the routers or the switches. This model conflicts with the networking required for OpenNebula VMs.

  • AWS filters traffic based on the IP–MAC association
    • Packets with a given source IP can only flow if they have the specific MAC
    • You cannot change the MAC of an EC2 instance NIC
  • EC2 instances don't get public IPv4 addresses directly attached to their NICs.
    • They get private IPs from a subnet of the VPC
    • The AWS internet gateway (an unmanaged device) has an internal record which maps public IPs to private IPs of the VPC subnet.
    • Each private IPv4 address can be associated with a single Elastic IP address, and vice versa
  • There is a limit of 5 Elastic IP addresses per region, although you can get more non-elastic public IPs.
  • Elastic IPs are bound to a specific private IPv4 address from the VPC, which binds them to specific nodes
  • Multicast traffic is filtered

If you try to assign an IP of the VPC subnet to a VM, traffic won't flow, because the AWS interconnection devices don't know the IP has been assigned and there isn't a MAC associated to it. Even if it had been assigned, it would have been bound to a specific MAC. This rules out Linux bridges, and 802.1Q is not an option either, since you would need to tag the switches, which you cannot do. VXLAN relies on multicast in order to work, so it is not an option. Open vSwitch suffers the same link and network layer restrictions AWS imposes.

Workarounds

In order to overcome the AWS network limitations, the best approach is to create an overlay network between the EC2 instances. Overlay networks would ideally be created using the VXLAN drivers; however, since multicast is disabled by AWS, we would need to modify the VXLAN driver code to use unicast. A simpler alternative is to use a VXLAN tunnel with Open vSwitch. This lacks scalability, since a tunnel connects two remote endpoints and adding more endpoints breaks the networking; nevertheless, with two nodes you can still get a total of 144 cores and 1 TB of RAM in terms of compute power. The result is two networks: the AWS network, used by OpenNebula's infrastructure, and a network isolated from AWS, encapsulated over the transport layer, unaffected by the AWS network issues described above. It is required to lower the MTU of the guest interfaces to account for the VXLAN overhead.
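The 1450-byte guest MTU used later in this guide follows from the VXLAN encapsulation overhead; as a quick sanity check (assuming the default 1500-byte MTU on the underlay network):

```shell
# VXLAN adds roughly 50 bytes per frame:
# outer Ethernet (14) + outer IP (20) + UDP (8) + VXLAN header (8)
underlay_mtu=1500
vxlan_overhead=$((14 + 20 + 8 + 8))
guest_mtu=$((underlay_mtu - vxlan_overhead))
echo "$guest_mtu"    # prints 1450
```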

In order to grant VMs Internet access, their traffic must be NATed in an EC2 instance that has a public IP associated to its private IP, masquerading the connections originated by the VMs. Thus, you need to set an IP belonging to the VM network on the Open vSwitch switch device of an i3.metal instance, enabling that EC2 instance to act as a router for Internet-VM intercommunication.

In order for your VMs to be publicly reachable from the Internet, you need to own a pool of available public IP addresses ready to assign to the VMs. The problem is that those IPs are matched to particular private IPs of the VPC. You can assign several pairs of private IPs and Elastic IPs to an i3.metal NIC. This results in i3.metal instances having several available public IPs. Then you need to DNAT the traffic destined to each Elastic IP to the corresponding VM private IP. You can make the DNATs and SNATs static to a particular set of private IPs and create an OpenNebula subnet with public visibility containing this address range. The DNATs need to be applied on every host in order to give VMs public visibility wherever they are deployed. Note that OpenNebula won't know the addresses of the pool of Elastic IPs, nor the matching private IPs of the VPC, so there will be a double mapping: the first by AWS, and the second by the OS (DNAT and SNAT) on the i3.metal instances:

 Elastic IP → VPC IP → VM IP → VPC IP → Elastic IP

 (inbound: AWS mapping, then Linux DNAT; outbound: Linux SNAT, then AWS mapping)


Setting up AWS Infrastructure for OpenNebula

You can disable public access to the OpenNebula nodes (since they only need to be accessible from the frontend) and access them via the frontend, by assigning them the frontend ubuntu user's public key or by using sshuttle.
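For instance, sshuttle can route traffic for the whole VPC subnet through an SSH connection to the frontend (a sketch; the frontend public address 203.0.113.5 and the subnet 10.0.0.0/24 are hypothetical examples):

```shell
# Tunnel all traffic destined to the VPC subnet through the frontend,
# so the nodes are reachable without public IPs of their own
sshuttle -r ubuntu@203.0.113.5 10.0.0.0/24
```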

Summary

  • We will launch 2 x i3.metal EC2 instances acting as virtualization nodes
  • OpenNebula will be deployed on an HVM EC2 instance
  • An EFS will be created in the same VPC the instances will run on
  • This EFS will be mounted with an NFS client on the instances for OpenNebula to run a shared datastore
  • The EC2 instances will be connected to a VPC subnet
  • Instances will have a NIC for the VPC subnet and a virtual NIC for the overlay network

Security Groups Rules

  1. TCP port 9869 for one-frontend (Sunstone)
  2. UDP port 4789 for VXLAN overlay network
  3. NFS inbound for EFS datastore (allow from one-x instances subnet)
  4. SSH for remote access

Create one-0 and one-1

This instance will act as a router for VMs running in OpenNebula.

  1. Click on Launch an Instance on EC2 management console
  2. Choose an AMI with an OS supported by OpenNebula, in this case we will use Ubuntu 16.04.
  3. Choose an i3.metal instance; it should be at the end of the list.
  4. Make sure your instances will run on the same VPC as the EFS.
  5. Load your key pair into your instance
  6. This instance will require SGs 2 and 4
  7. Elastic IP association
    1. Assign several private IP addresses to one-0 or one-1
    2. Allocate Elastic IPs (up to five)
    3. Associate Elastic IPs in a one-to-one fashion to the assigned private IPs of one-0 or one-1

Create one-frontend

  1. Follow the same steps as for the node creation, except:
    1. Deploy a t2.medium EC2 instance
    2. SG 1 is also required

Create EFS

  1. Click on create file system on EFS management console
  2. Choose the same VPC the EC2 instances are running on
  3. Choose SG 3
  4. Add your tags and review your config
  5. Create your EFS

After installing the nodes and the frontend, remember to follow the shared datastore setup in order to deploy VMs using the EFS. In this case you need to mount the filesystem exported by the EFS on the corresponding datastore ID directory, the same way you would with a regular NFS server. Take a look at the EFS documentation for more information.
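As a sketch, mounting the EFS export on the directory of datastore 1 could look like this (the filesystem DNS name fs-12345678.efs.us-east-1.amazonaws.com and the datastore ID are hypothetical; substitute your own):

```shell
# NFSv4.1 client for mounting EFS
apt install nfs-common

# Mount the EFS export on the OpenNebula datastore directory,
# using the mount options recommended by AWS for EFS
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  fs-12345678.efs.us-east-1.amazonaws.com:/ /var/lib/one/datastores/1
```

Add a matching entry to /etc/fstab so the mount survives reboots.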

Installing OpenNebula on AWS infrastructure

Follow Front-end Installation.

Set up one-x instances as OpenNebula nodes

Install the OpenNebula node packages following the KVM Node Installation guide. Then follow the Open vSwitch setup, but don't add the physical network interface to the Open vSwitch.

You will create an overlay network for VMs in a node to communicate with VMs in the other node using a VXLAN tunnel with openvswitch endpoints.

Install Open vSwitch and create a bridge. This configuration will persist across power cycles.

apt install openvswitch-switch

ovs-vsctl add-br ovsbr0

Create the VXLAN tunnel. The remote endpoint will be one-1's private IP address.

ovs-vsctl add-port ovsbr0 vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=10.0.0.12

This is the one-0 configuration; repeat the configuration above on one-1, changing the remote endpoint to one-0.
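Spelled out, the one-1 side of the tunnel would look like this (assuming 10.0.0.11 is one-0's private IP address, a hypothetical value; substitute your own):

```shell
# On one-1: create the same bridge ...
ovs-vsctl add-br ovsbr0

# ... and point the VXLAN endpoint at one-0 instead
ovs-vsctl add-port ovsbr0 vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=10.0.0.11
```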

Setting up one-0 as the gateway for VMs

Set the network configuration for the bridge.

ip addr add 192.168.0.1/24 dev ovsbr0

ip link set up ovsbr0

In order to make the configuration persistent:

echo -e "auto ovsbr0 \niface ovsbr0 inet static \n       address 192.168.0.1/24" >> /etc/network/interfaces

Set one-0 as a NAT gateway for VMs in the overlay network to access the Internet. Make sure you SNAT to a private IP with an associated public IP. For the NAT network:

iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -j SNAT --to-source 10.0.0.41

Write the mappings for public visibility on both the one-0 and one-1 instances.

iptables -t nat -A PREROUTING -d 10.0.0.41 -j DNAT --to-destination 192.168.0.250

iptables -t nat -A PREROUTING -d 10.0.0.42 -j DNAT --to-destination 192.168.0.251

iptables -t nat -A PREROUTING -d 10.0.0.43 -j DNAT --to-destination 192.168.0.252

iptables -t nat -A PREROUTING -d 10.0.0.44 -j DNAT --to-destination 192.168.0.253

iptables -t nat -A PREROUTING -d 10.0.0.45 -j DNAT --to-destination 192.168.0.254

Make sure you save your iptables rules so that they persist across reboots. Also, check that /proc/sys/net/ipv4/ip_forward is set to 1; the opennebula-node package should have done that.
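On Ubuntu 16.04 one way to do this is the iptables-persistent package (a sketch; package and file names are the Debian/Ubuntu defaults):

```shell
# Install the persistence helper; it offers to save the current rules during setup
apt install iptables-persistent

# Save the current rules to /etc/iptables/rules.v4 so they are restored at boot
netfilter-persistent save

# Confirm IP forwarding is enabled (the opennebula-node package should have set it)
sysctl net.ipv4.ip_forward
```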

Defining Virtual Networks in OpenNebula

You need to create Open vSwitch networks with the guest MTU set to 1450. Set the bridge to the one carrying the VXLAN tunnel, in this case ovsbr0.

For the public net you can define a network with the address range limited to the IPs with DNATs, and another network (SNAT only) in a non-overlapping address range, or in an address range containing the DNAT IPs in the reserved list. The gateway should be the i3.metal node with the overlay network IP assigned to the Open vSwitch switch. You can also set the DNS to the AWS-provided DNS of the VPC.

Public net example:

 BRIDGE = "ovsbr0"
 DNS = "10.0.0.2"
 GATEWAY = "192.168.0.1"
 GUEST_MTU = "1450"
 NETWORK_ADDRESS = "192.168.0.0"
 NETWORK_MASK = "255.255.255.0"
 PHYDEV = ""
 SECURITY_GROUPS = "0"
 VLAN_ID = ""
 VN_MAD = "ovswitch"

Testing the Scenario

You can import a virtual appliance from the marketplace to run the tests. This should work flawlessly, since it only requires a regular frontend with Internet access. Refer to the marketplace documentation.

VM-VM Intercommunication

Deploy a VM in each node …

and ping each other.

Internet-VM Intercommunication

Install apache2 using the default OS repositories and view the default index.html file by accessing port 80 of the corresponding public IP from your workstation.

First, check your public IP.

Then access port 80 of the public IP: just enter the IP address in the browser.
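From the command line, the same check could look like this (assuming 203.0.113.10 is one of the Elastic IPs DNATed to the VM; the address is a hypothetical example):

```shell
# From inside the VM: the apparent public IP should be the SNAT address
curl -s ifconfig.co

# From your workstation: fetch the default Apache page through the DNATed Elastic IP
curl http://203.0.113.10/
```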