vOneCloud/OpenNebula at VMworld 2018 US in Las Vegas

On August 26-30, VMworld 2018 US will be held in Las Vegas. This is a must-attend event where almost everyone with an interest in virtualization and cloud computing will be networking with industry experts.

The OpenNebula team will be present at VMworld with a booth dedicated to showcasing the upcoming vOneCloud 3.2 (due for release in a few days and incorporating the new OpenNebula 5.6), the open source replacement for VMware vCloud. There will be a focus on new features such as multiple cluster network support and vCenter cluster migration, and on tools such as the vCenter Marketplace and the new OnevCenter Import command to easily import any vCenter resource.

If you are planning to attend VMworld next month, make sure you register and do not forget to come around our booth, #2008. You will be able to see in a live demo how a VMware-based infrastructure can be turned into a cloud with a slick, fully functional self-service portal to deliver a VM catalog to your end users, in 5 minutes!

OpenNebula 5.6 ‘Blue Flash’ is Out!

The OpenNebula team is proud to announce a new stable version of the leading open source Cloud Management Platform. OpenNebula 5.6 (Blue Flash) is the fourth major release of the OpenNebula 5 series. A significant effort has been applied in this release to enhance the features introduced in 5.4 Medusa, while keeping an eye on implementing the features most demanded by the community. A massive set of improvements happened at the core level to increase robustness and scalability, and a major refactor happened in the vCenter integration, particularly in the import process, which has been streamlined. Virtually every component of OpenNebula has been reviewed to target usability and functional improvements, trying to keep API changes to a minimum to avoid disrupting ecosystem components.

In this release several development efforts have been invested in making OpenNebula even better for large-scale deployments. These improvements include both changes in the OpenNebula core to better handle concurrency and refined interfaces. The Sunstone dashboard has been redesigned to provide sensible information in a more responsive way. Sunstone also features some new styling touches here and there, and it has been updated to version 5 of Font Awesome.

Blue Flash also includes several quality-of-life improvements for end users. In particular, it is now possible to schedule periodic actions on VMs. Want to shut down your VM every Friday at 5 p.m. and start it again on Monday at 7 a.m. just before work? We've got you covered. Also, don't want to accidentally terminate that important VM, or want to freeze a Network? Now you can set locks on common resources to prevent actions from being performed on them.
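As a minimal sketch of how these two features look from the CLI (the VM ID is a placeholder, and the exact SCHED_ACTION attribute names are an assumption based on the 5.6 scheduled-actions feature; check the 5.6 reference documentation for the definitive syntax):

    # Lock VM 42 so that accidental terminate/delete operations are rejected,
    # and unlock it again later (resource locks are new in 5.6).
    onevm lock 42
    onevm unlock 42

    # A periodic action expressed as a SCHED_ACTION attribute in the VM template
    # (attribute names are an assumption for illustration):
    # SCHED_ACTION = [ ACTION = "poweroff", TIME = "1533884400", REPEAT = "0", DAYS = "5" ]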

5.6 completes the major redesign of the vCenter driver started in 5.4. The new integration with vCenter features stability and performance improvements, as well as important new features like extended multi-cluster support and a redesigned import workflow with new Sunstone tabs as well as a new CLI.

New integrations also bring Docker management. Any OpenNebula cloud can import from the marketplace a VM prepared to act as a Docker Engine, allowing the deployment of Docker applications on top. Also, an integration with Docker Machine is available to seamlessly manage Docker Engines without having to interact with OpenNebula APIs or interfaces.
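As a rough sketch of what the Docker Machine integration could look like from a workstation (the --opennebula-* flag names, credentials and template name below are assumptions for illustration; consult the docker-machine-opennebula driver documentation for the actual option names):

    # Hypothetical example: provision a Docker Engine VM through the OpenNebula
    # Docker Machine driver. Endpoint, credentials and template name are placeholders,
    # and the exact --opennebula-* flag names are assumptions.
    docker-machine create --driver opennebula \
        --opennebula-user oneadmin \
        --opennebula-password secret \
        --opennebula-template-name docker-engine \
        docker-node-1

    # Point the local docker CLI at the new engine and run a test container.
    eval "$(docker-machine env docker-node-1)"
    docker run hello-world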

Following our tradition this OpenNebula release is named after NGC 6905, also known as the Blue Flash Nebula, a planetary nebula in the constellation Delphinus. It was discovered by William Herschel in 1784.

OpenNebula 5.6 Blue Flash is considered to be a stable release and, as such, an update is available for production environments.

The OpenNebula project would like to thank the community members and users who have contributed to this software release by being active with the discussions, answering user questions, or providing patches for bugfixes, features and documentation.

The SetUID/SetGID functionality for VM Templates is funded by University of Louvain. The Ceph drivers enabling VM disks in the hypervisor local storage are funded by Flexyz B.V.

Relevant Links

Building an OpenNebula Private Cloud on AWS Bare Metal

Intro

Given that AWS now offers a bare metal service as another choice of EC2 instance, you are now able to deploy virtual machines based on HVM technologies, like KVM, without the heavy performance overhead imposed by nested virtualization. This enables you to leverage the highly scalable and available AWS public cloud infrastructure in order to deploy your own cloud platform based on full virtualization.
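A quick sanity check, once a bare metal instance is up, to confirm the CPU virtualization extensions are exposed and the KVM modules load (generic Linux commands, no OpenNebula-specific assumptions):

    # Count CPUs exposing Intel VT-x / AMD-V flags; a non-zero result means HVM guests can run.
    egrep -c '(vmx|svm)' /proc/cpuinfo

    # Verify the KVM kernel modules are loaded.
    lsmod | grep kvm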

Architecture Overview

The goal is to have a private cloud running KVM virtual machines, able to communicate with each other, with the hosts, and with the Internet, running on remote and/or local storage.

Compute

I3.metal instances, besides being bare metal, have a very high compute capacity. We can create a powerful private cloud with a small number of these instances acting as worker nodes.

Orchestration

Since OpenNebula is a very lightweight cloud management platform, and the control plane doesn’t require virtualization extensions, you can deploy it on a regular HVM EC2 instance. You could also deploy it as a virtual instance using a hypervisor running on an i3.metal instance, but this approach adds extra complexity to the network.

Storage

We can leverage the i3.metal's high bandwidth and fast storage in order to have locally-backed storage for our image datastore. However, having a shared NAS-like datastore would be more convenient. Although we could have a regular EC2 instance providing an NFS server, AWS offers a service specifically designed for this use case: EFS.

Networking

OpenNebula requires a service network for the infrastructure components (frontend, nodes and storage) and instance networks for the VMs to communicate. This guide will use Ubuntu 16.04 as the base OS.

Limitations

AWS provides a network stack designed for EC2 instances, and you don't really control the interconnection devices like the internet gateway, the routers or the switches. This model conflicts with the networking required for OpenNebula VMs:

  • AWS filters traffic based on IP – MAC association
    • Packets with a source IP can only flow if they have the specific MAC
    • You cannot change the MAC of an EC2 instance NIC
  • EC2 Instances don’t get public IPv4 directly attached to their NICs.
    • They get private IPs from a subnet of the VPC
    • AWS internet gateway (an unmanaged device) has an internal record which maps public IPs to private IPs of the VPC subnet.
    • Each private IPv4 address can be associated with a single Elastic IP address, and vice versa
  • There is a limit of 5 Elastic IP addresses per region, although you can get more non-elastic public IPs.
  • Elastic IPs are bound to a specific private IPv4 from the VPC, which binds them to specific nodes
  • Multicast traffic is filtered

If you try to assign an IP of the VPC subnet to a VM, traffic won't flow because the AWS interconnection devices don't know the IP has been assigned and there isn't a MAC associated to it; and even if it had been assigned, it would have been bound to a specific MAC. This rules out Linux bridges. 802.1Q is not an option either, since you would need to tag the switches and you can't do that. VXLAN relies on multicast in order to work, so it is not an option. Open vSwitch suffers the same link and network layer restrictions AWS imposes.

Workarounds

OpenNebula can manage networks using several technologies. In order to overcome the AWS network limitations, it is suitable to create an overlay network between the EC2 instances. Overlay networks would ideally be created using the VXLAN drivers; however, since multicast is disabled by AWS, we would need to modify the VXLAN driver code to use unicast. A simpler alternative is to use a VXLAN tunnel with Open vSwitch. This lacks scalability, since a tunnel connects two remote endpoints and adding more endpoints breaks the networking; nevertheless, you can still get a total of 144 cores and 1 TB of RAM in terms of compute power. The result is the AWS network for OpenNebula's infrastructure, plus a network isolated from AWS, encapsulated over the transport layer, which sidesteps the AWS network issues described above. It is required to lower the MTU of the guest interfaces to account for the VXLAN overhead.

In order to grant the VMs Internet access, their traffic needs to be NATed in an EC2 instance whose public IP is associated to its private IP, masquerading the connections originated by the VMs. Thus, you need to set an IP belonging to the VM network on the Open vSwitch switch device of an i3.metal instance, enabling that EC2 instance to act as a router for Internet-VM intercommunication.

In order for your VMs to be publicly available from the Internet you need to own a pool of available public IP addresses ready to assign to the VMs. The problem is that those IPs are matched to particular private IPs of the VPC. You can assign several pairs of private IPs and Elastic IPs to an i3.metal NIC. This results in i3.metal instances having several available public IPs. Then you need to DNAT the traffic destined to each Elastic IP to the corresponding VM private IP. You can make the DNATs and SNATs static to a particular set of private IPs and create an OpenNebula subnet for public visibility containing this address range. The DNATs need to be applied on every host in order to give VMs public visibility wherever they are deployed. Note that OpenNebula won't know the addresses of the pool of Elastic IPs, nor the matching private IPs of the VPC. So there will be a double mapping: the first by AWS, and the second by the OS (DNAT and SNAT) on the i3.metal instances:

Inbound: Elastic IP → VPC IP → VM IP (AWS maps the Elastic IP to the VPC IP; the node DNATs it to the VM IP)

Outbound: VM IP → VPC IP → Elastic IP (the node SNATs the VM IP to the VPC IP; AWS maps it back to the Elastic IP)

Setting up AWS Infrastructure for OpenNebula

You can disable public access to the OpenNebula nodes (since they only need to be accessible from the frontend) and access them via the frontend, by adding the frontend's public key to the nodes' ubuntu user or by using sshuttle.
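For instance, a minimal sshuttle invocation from your workstation that tunnels traffic for the VPC subnet through the frontend (the frontend public IP is a placeholder, and 10.0.0.0/24 matches the example subnet used below):

    # Route all traffic destined to the VPC subnet through an SSH tunnel to the frontend.
    sshuttle -r ubuntu@<frontend-public-ip> 10.0.0.0/24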

Summary

  • We will launch 2 x i3.metal EC2 instances acting as virtualization nodes
  • OpenNebula will be deployed on an HVM EC2 instance
  • An EFS will be created in the same VPC the instances will run on
  • This EFS will be mounted with an NFS client on the instances for OpenNebula to run a shared datastore
  • The EC2 instances will be connected to a VPC subnet
  • Instances will have a NIC for the VPC subnet and a virtual NIC for the overlay network

Security Groups Rules

  1. TCP port 9869 for one frontend
  2. UDP port 4789 for VXLAN overlay network
  3. NFS inbound for EFS datastore (allow from one-x instances subnet)
  4. SSH for remote access
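If you prefer the AWS CLI over the console, a sketch of rules 1 and 2 could look like the following (the security group IDs and CIDRs are placeholders; rules 3 and 4 follow the same pattern):

    # Sunstone access to the frontend (rule 1).
    aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
        --protocol tcp --port 9869 --cidr 0.0.0.0/0

    # VXLAN traffic between the nodes (rule 2), restricted to the VPC subnet.
    aws ec2 authorize-security-group-ingress --group-id sg-yyyyyyyy \
        --protocol udp --port 4789 --cidr 10.0.0.0/24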

Create one-0 and one-1

These instances will act as virtualization nodes; one-0 will additionally act as a router for the VMs running in OpenNebula.

  1. Click on Launch an Instance on EC2 management console
  2. Choose an AMI with an OS supported by OpenNebula; in this case we will use Ubuntu 16.04.
  3. Choose an i3.metal instance; it should be at the end of the list.
  4. Make sure your instances will run on the same VPC as the EFS.
  5. Load your key pair into your instance
  6. This instance will require SGs 2 and 4
  7. Elastic IP association
    1. Assign several private IP addresses to one-0 or one-1
    2. Allocate Elastic IPs (up to five)
    3. Associate Elastic IPs in a one-to-one fashion to the assigned private IPs of one-0 or one-1

Create one-frontend

  1. Follow the same steps as for the node creation, except:
    1. Deploy a t2.medium EC2 instance
    2. SG 1 is also required

Create EFS

  1. Click on create file system on EFS management console
  2. Choose the same VPC the EC2 instances are running on
  3. Choose SG 3
  4. Add your tags and review your config
  5. Create your EFS

After installing the nodes and the frontend, remember to follow the shared datastore setup in order to deploy VMs using the EFS. In this case you need to mount the filesystem exported by the EFS on the corresponding datastore ID, the same way you would with a regular NFS server. Take a look at the EFS documentation to get more information.
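As a sketch, assuming the shared system datastore has ID 0 and using a placeholder EFS DNS name (take the real one from the EFS console):

    # Mount the EFS export over NFSv4.1 on the shared datastore directory (ID 0 here).
    sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
        fs-xxxxxxxx.efs.us-east-1.amazonaws.com:/ /var/lib/one/datastores/0

    # Optionally make the mount persistent across reboots.
    echo "fs-xxxxxxxx.efs.us-east-1.amazonaws.com:/ /var/lib/one/datastores/0 nfs4 nfsvers=4.1 0 0" | sudo tee -a /etc/fstab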

Installing OpenNebula on AWS infrastructure

Follow Front-end Installation.

Setup one-x instances as OpenNebula nodes

Install the OpenNebula node packages following the KVM Node Installation guide. Then follow the Open vSwitch setup, but don't add the physical network interface to the Open vSwitch bridge.

You will create an overlay network for VMs in a node to communicate with VMs in the other node using a VXLAN tunnel with openvswitch endpoints.

Create an openvswitch-switch. This configuration will persist across power cycles.

apt install openvswitch-switch

ovs-vsctl add-br ovsbr0

Create the VXLAN tunnel. The remote endpoint will be one-1's private IP address.

ovs-vsctl add-port ovsbr0 vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=10.0.0.12

This is the one-0 configuration; repeat the configuration above on one-1, changing the remote endpoint to one-0.
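For example, on one-1 (10.0.0.11 is a hypothetical private IP for one-0; use the real one from your VPC):

    # On one-1: same bridge, but the VXLAN tunnel points back to one-0.
    ovs-vsctl add-br ovsbr0
    ovs-vsctl add-port ovsbr0 vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=10.0.0.11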

Setting up one-0 as the gateway for VMs

Set the network configuration for the bridge.

ip addr add 192.168.0.1/24 dev ovsbr0

ip link set up ovsbr0

In order to make the configuration persistent

echo -e "auto ovsbr0 \niface ovsbr0 inet static \n       address 192.168.0.1/24" >> interfaces

Set one-0 as a NAT gateway for the VMs in the overlay network to access the Internet. Make sure you SNAT to a private IP with an associated public IP. For the NAT network:

iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -j SNAT --to-source 10.0.0.41

Write the mappings for public visibility on both the one-0 and one-1 instances.

iptables -t nat -A PREROUTING -d 10.0.0.41 -j DNAT --to-destination 192.168.0.250

iptables -t nat -A PREROUTING -d 10.0.0.42 -j DNAT --to-destination 192.168.0.251

iptables -t nat -A PREROUTING -d 10.0.0.43 -j DNAT --to-destination 192.168.0.252

iptables -t nat -A PREROUTING -d 10.0.0.44 -j DNAT --to-destination 192.168.0.253

iptables -t nat -A PREROUTING -d 10.0.0.45 -j DNAT --to-destination 192.168.0.254

Make sure you save your iptables rules in order to make them persist across reboots. Also, check that /proc/sys/net/ipv4/ip_forward is set to 1; the opennebula-node package should have done that.
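One way to do both on Ubuntu 16.04 (using iptables-persistent is an assumption of this guide, not something OpenNebula requires):

    # Persist the current iptables rules across reboots.
    apt install iptables-persistent
    netfilter-persistent save

    # Confirm IPv4 forwarding is enabled (should print 1).
    cat /proc/sys/net/ipv4/ip_forward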

Defining Virtual Networks in OpenNebula

You need to create Open vSwitch networks with the guest MTU set to 1450. Set the bridge to the one with the VXLAN tunnel, in this case ovsbr0.

For the public net you can define a network with its address range limited to the IPs with DNATs, and another network (SNAT only) in a non-overlapping address range, or in an address range containing the DNAT IPs in its reserved list. The gateway should be the i3.metal node with the overlay network IP assigned to the Open vSwitch switch. You can also set the DNS to the AWS-provided DNS of the VPC.

Public net example:

 BRIDGE = "ovsbr0"
 DNS = "10.0.0.2"
 GATEWAY = "192.168.0.1"
 GUEST_MTU = "1450"
 NETWORK_ADDRESS = "192.168.0.0"
 NETWORK_MASK = "255.255.255.0"
 PHYDEV = ""
 SECURITY_GROUPS = "0"
 VLAN_ID = ""
 VN_MAD = "ovswitch"
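Saving the template above to a file, the network can then be registered from the CLI (the file name public-net.tmpl is arbitrary):

    # Create the virtual network from the template file and verify it was registered.
    onevnet create public-net.tmpl
    onevnet list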

Testing the Scenario

You can import a virtual appliance from the marketplace to run the tests. This should work flawlessly, since it only requires a regular frontend with Internet access. Refer to the marketplace documentation.
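For example, a small test appliance can be pulled from the OpenNebula Marketplace with the CLI (the appliance name and the target datastore are placeholders; pick any KVM appliance you like):

    # Export a marketplace appliance into the local image datastore,
    # creating both the image and a VM template.
    onemarketapp export "Ttylinux - KVM" ttylinux --datastore default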

VM-VM Intercommunication

Deploy a VM in each node …

and ping each other.

Internet-VM Intercommunication

Install apache2 using the default OS repositories in one of the VMs, and view the default index.html file when accessing port 80 of the corresponding public IP from your workstation.

First check your public IP

Then access port 80 of the public IP; just enter the IP address in the browser.
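From the VM and the workstation respectively, a quick check could be (the Elastic IP is a placeholder):

    # Inside the test VM: install the web server.
    sudo apt install -y apache2

    # From your workstation: fetch the default page through the Elastic IP / DNAT chain.
    curl http://<elastic-ip>/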

 

 

OpenNebulaConf 2018: Agenda Available

 

The OpenNebula Project is proud to announce the agenda and line-up of speakers for the seventh OpenNebula Conference to be held in Amsterdam on the 12-13 of November 2018.

OpenNebulaConf is your chance to get an up-close look at OpenNebula’s latest product updates, hear the project’s vision and strategy, get hands-on tutorials and workshops, and get lots of opportunities to network and share ideas with your peers. You’ll also get to attend all the parties and after-parties to keep the networking and the good times going long after the show floor closes for the day.

Keynotes

The agenda includes three keynote speakers:

Educational Sessions 

This year we will have two pre-conference tutorials:

Community Sessions

We had a big response to the call for presentations. Thanks for submitting your talk proposals!

Like in previous editions, we will have a single track with 15-minute talks, to keep all the audience focused and interested. We have given our very best to get the perfect balance of topics.

We will also have a Meet the Experts session, providing an informal atmosphere where delegates can interact with experts who will give their undivided attention for knowledge, insight and networking; and a session for 5-minute lightning talks. If you would like to talk in these sessions, please contact us!

Besides the amazing talks, there are multiple goodies packed with the OpenNebulaConf registration. You have until September 15th to get your OpenNebulaConf tickets for the deeply discounted price of just €400 (plus taxes) apiece. However, space is limited, so register asap.

We are looking forward to welcoming you personally in Amsterdam!

 

vOneCloud 3.0.7 released!

We want to let you know that OpenNebula Systems has just announced the availability of vOneCloud version 3.0.7.

vOneCloud 3.0.7 is based on OpenNebula 5.4.15 and as such it includes all the bug fixes and functionalities introduced in 5.4.15: OpenNebula 5.4.15 Release Notes.

vOneCloud 3.0.7 is a maintenance release with the following minor improvements:

  • Better updateconf, check VM state to allocate a new cluster VNC port
  • Better timeouts for xml-rpc clients
  • Fix history records when VMs are imported in POWEROFF state
  • Changed cpu mode and fallback
  • Filter in CLI commands now accept != operator
  • Improved Sunstone text fields

Also, 3.0.7 features the following bugfixes:

  • vCenter driver is capable of importing network names with slashes
  • Fix check in updateconf for non-running VMs
  • Changing overcommitment on a host updates other hosts too
  • Fixed bug with updateconf and vnc port
  • Sunstone reloads the page with a group change of a user
  • Memory overcommitment doesn’t support float values
  • Changes in VM Template not saved during update
  • Rollback datastore quotas. Add datastore quotas to one.vmtemplate.instantiate
  • Disk SIZE is not a valid integer
  • Do not reset resizes and quotas after a recover --recreate
  • AR size change in reservations should be disabled in Sunstone
  • Multiple DISK attributes into VM Template section
  • VM created w/ wrong disk size (size on instantiate)
  • Error in group create/update
  • ActionManager threads counter not decreased
  • Groups shouldn’t be cached in Sunstone

Relevant Links

OpenNebula Newsletter – June 2018

Get an overview of the work and achievements during the last month, along with community updates, contributions and new integrations.

We are getting used to having great news every month, and June has brought us many many joys. Keep reading to discover why we had such a great period.

As a quick reminder, we would like to state that Santa Clara’s TechDay is the next big event in our agenda. Do not miss the chance to come!

Technology

OpenNebula 5.6 is here!!

Well, almost. We have released the OpenNebula ‘Blue Flash’ Release Candidate, which is just one step away from being completely ready. You know we like to do things right even if it takes a bit longer. You can help us by downloading this RC version, testing it, and reporting issues through our GitHub project or our community forum.

The OpenNebula team is now set to bug-fixing mode. Note that this is a Release Candidate aimed at testers and developers to try the new features and send their more than welcome feedback for the final release.

Of course we didn’t forget about our stable release, OpenNebula 5.4, for which we have uploaded a maintenance release, 5.4.13, including all the bug fixes from previous hotfix packages. You can download the built packages here.

Moreover, we got some bugs solved in our latest hotfix updates, OpenNebula 5.4.14 and 5.4.15.

You know we, at OpenNebula, love to make your life easier; that is why we are so excited to present you the new oneprovision tool. Yes, as you might have guessed, this will make your host provisioning much quicker and easier. By just running a few commands you will be able to have a complete bare-metal host configured, running, and perfectly integrated with your cloud frontend. Our engineering team has tested this tool with Packet bare-metal servers and it works like a charm!

You can read everything about this new provisioning tool thanks to Alejandro Huertas, who has written this article containing a description and a how-to-use guide.

Community

It is impressive to be almost in July, with the summer started, and see how some of you are still working on bringing great features to OpenNebula, instead of having a relaxing bath in the fresh water by the sea. We truly appreciate it ;)

Before listing all the contributions of the community, I would like to remind you that, until the day we release the stable version of OpenNebula 5.6, you still have time to help with the translations. So if you want OpenNebula to speak your language, contribute through this link. Contributions of any size are welcome, so don’t be shy.

To start with, we want to let you know that the latest Devuan ASCII guest images are now freely available, ready to import into your OpenNebula infrastructure. You can check this neat tutorial for details on where to download them and for an installation guide.

Having new technologies integrated with OpenNebula is one of the biggest pleasures for the OpenNebula Team; this is why we are very excited to present the new TM driver developed by our colleagues at LizardFS. With this driver, OpenNebula and LizardFS storage are fully blended. For further information visit the following link.

Next, we would like to thank @bestopensource for their great job promoting open source projects like OpenNebula. This week they reminded all OpenNebula administrators that they can use this collection of scripts created by one of our most experienced and active users. Find these scripts here.

Also on Twitter, Lorenzo Faleschini, CTO at NodeWeaver, points out the ease of working with OpenNebula:

 

Playing around with @nodeweaver at @HigecoSrl.
Devs are enjoying the ability to boot entire infrastructures to test their stack with a couple of clicks.
@opennebula Flow Happiness is in the air.

 

Outreach

There is still a long way to go until OpenNebulaConf 2018; however, we keep working on the organization to get interesting talks regarding open source projects and OpenNebula use cases in real-world scenarios. Remember that if you register before the 15th of September you will get a 20% discount!

We know some of you can’t wait until November to see us but do not worry, we will be present in many events this summer.

If you live in the US this is your lucky summer. First you will be able to visit us at VMworld US in Las Vegas on the 26-30th of August, and then we will be present at Santa Clara’s OpenNebula TechDay, hosted by Hitachi at their headquarters on the 30th of August. This TechDay is free of charge, so feel free to register here and join us. Visit the event’s website to get all the related information.

Of course, as every year, we will have a booth at VMworld EU 2018, which will be held on the 5-8th of November in Barcelona, presenting some of the greatest advances coming in OpenNebula Blue Flash.

We have some other events coming up in between, like the Frankfurt TechDay hosted by LINBIT. Check out our official agenda to see more events and updates.

Okay, enough talk-shop!

I want to take this opportunity to wish you all happy and long holidays. See you next month, with a nicer tan I hope ;)

OpenNebula 5.6 ‘Blue Flash’ RC is Out!

OpenNebula 5.6 (Blue Flash) is the fourth major release of the OpenNebula 5 series. A significant effort has been applied in this release to enhance the features introduced in 5.4 Medusa, while keeping an eye on implementing the features most demanded by the community. A massive set of improvements happened at the core level to increase robustness and scalability, and a major refactor happened in the vCenter integration, particularly in the import process, which has been streamlined. Virtually every component of OpenNebula has been reviewed to target usability and functional improvements, trying to keep API changes to a minimum to avoid disrupting ecosystem components.

Following our tradition this OpenNebula release is named after NGC 6905, also known as the Blue Flash Nebula, a planetary nebula in the constellation Delphinus. It was discovered by William Herschel in 1784.

The OpenNebula team is now set to bug-fixing mode. Note that this is an RC release aimed at testers and developers to try the new features and send their more than welcome feedback for the final release. Note that, being an RC, there is no migration path from the previous stable version (5.4.13) nor a migration path to the final stable version (5.6.0).

The OpenNebula project would like to thank the community members and users who have contributed to this software release by being active with the discussions, answering user questions, or providing patches for bugfixes, features and documentation.

The SetUID/SetGID functionality for VM Templates is funded by University of Louvain. The Ceph drivers enabling VM disks in the hypervisor local storage are funded by Flexyz B.V.

Relevant Links

A Sneak Preview of the Upcoming Features for Cloud Disaggregation

During the last months we have been working on a new internal project to enable disaggregated private clouds. Our aim is to provide the tools and methods needed to grow your private cloud infrastructure with physical resources, initially individual hosts but eventually complete clusters, running on remote bare-metal cloud providers.

Two of the use cases that will be supported by this new disaggregated cloud approach will be:

  • Distributed Cloud Computing. This approach will allow the transition from centralized clouds to distributed edge-like cloud environments. You will be able to grow your private cloud with resources at edge data center locations to meet latency and bandwidth needs of your workload.
  • Hybrid Cloud Computing. This approach works as an alternative to the existing hybrid cloud drivers. When there is a peak of demand and a need for extra computing power, you will be able to dynamically grow your underlying physical infrastructure. Compared with the use of hybrid drivers, this approach can be more efficient because it involves a single management layer. It is also a simpler approach because you can continue using the existing OpenNebula images and templates. Moreover, you always keep complete control over the infrastructure and avoid vendor lock-in.

 

There are several benefits of this approach over the traditional, more decoupled hybrid solution that involves using the provider’s cloud API. One of them, however, stands tall among the rest: the ability to move offline workloads between your local and rented resources. A tool to automatically move images and VM Templates from local clusters to remotely provisioned ones will be included in the disaggregated private cloud support.

In this post, we show a preview of a prototype version of “oneprovision”, a tool to deploy new remote hosts from a bare-metal cloud provider and add them to your private cloud. In particular, we are working with Packet to build this first prototype.

Automatic Provision of Remote Resources

A simple tool, oneprovision, will be provided to deal with all aspects of the physical host lifecycle. The tool should be installed on the OpenNebula frontend, as it shares parts with the frontend components. It’s a standalone tool intended to be run locally on the frontend; it’s not a service (for now). Its use is similar to what you may know from the other OpenNebula CLI tools.

Let’s look at a demo of how to deploy an independent KVM host on Packet, the bare metal provider.

Listing

Listing the provisions is very much the same as listing any other OpenNebula objects.

    $ onehost list
    ID NAME CLUSTER RVM ALLOCATED_CPU ALLOCATED_MEM STAT
    0 localhost default 0 0 / 400 (0%) 0K / 7.5G (0%) on
    $ oneprovision list
    ID NAME            CLUSTER RVM PROVIDER STAT

Based on the listings above, we don’t have any provisions and our resources are limited to just the localhost.

Provision

Adding a new host is as simple as running a command. Unfortunately, the number of parameters required to specify the provision would be too much for the command line. That’s why most of the details are provided in a separate provision description file, a YAML-formatted document.

Example (packet_kvm.yaml):

---

# Provision and configuration defaults
provision:
  driver: "packet"
  token: "********************************"
  project: "************************************"
  facility: "ams1"
  plan: "baremetal_0"
  billing_cycle: "hourly"

configuration:
  opennebula_node_kvm_param_nested: true

##########

# List of devices to deploy with
# provision and configuration overrides:
devices:
  - provision:
      hostname: "kvm-host001.priv.ams1"
      os: "centos_7"
Now we use this description file with the oneprovision tool to allocate a new host on Packet, seamlessly configure the new host to work as a KVM hypervisor, and finally add it into OpenNebula.

    $ oneprovision create -v kvm -i kvm packet_kvm.yaml
    ID: 63

Now, the listings show our new provision.

    $ oneprovision list
    ID NAME            CLUSTER RVM PROVIDER STAT
    63 147.75.33.121   default 0 packet   on

    $ onehost list
    ID NAME            CLUSTER RVM ALLOCATED_CPU      ALLOCATED_MEM STAT
    0 localhost       default 0  0 / 400 (0%) 0K / 7.5G (0%) on
    63 147.75.33.121   default 0 0 / 400 (0%)     0K / 7.8G (0%) on

You can also check your Packet dashboard to see the new host.

Host Management

The tool provides a few physical host management commands. Although you can still use your favorite UI or provider-specific CLI tools to meet the same goal, oneprovision also deals with the management of the host objects in OpenNebula.

E.g., if you power off the physical machine via oneprovision, the related OpenNebula host is also switched into the offline state, so that OpenNebula doesn’t waste time monitoring an unreachable host.

You will be able to reset the host.

    $ oneprovision reset 63

Or, completely power off and resume any time later.

    $ oneprovision poweroff 63
    $ oneprovision list
    ID NAME            CLUSTER RVM PROVIDER STAT
    63 147.75.33.121   default 0 packet   off

    $ oneprovision resume 63

    $ oneprovision list
    ID NAME            CLUSTER RVM PROVIDER STAT
    63 147.75.33.121   default 0 packet   on 

Terminate

When the provision isn’t needed anymore, it can be deleted. The physical host is released both on the bare metal provider’s side and in OpenNebula.

    $ oneprovision delete 63

    $ oneprovision list
    ID NAME            CLUSTER RVM PROVIDER STAT

    $ onehost list
    ID NAME            CLUSTER RVM ALLOCATED_CPU      ALLOCATED_MEM STAT
    0 localhost       default 0  0 / 400 (0%) 0K / 7.5G (0%) on

Stay tuned for the release of this first feature of our new cloud disaggregation project, and as always we will be looking forward to your feedback!

TechDay Frankfurt Hosted by LINBIT – 26SEPT18

We are glad to announce that our friends from LINBIT will organize the first OpenNebula TechDay in Frankfurt.

As usual in this TechDay you will be able to enjoy a full experience around cloud and open source projects. There will be talks and presentations from experienced people working with LINBIT and OpenNebula.

Yes! A 4-hour hands-on tutorial will be conducted by OpenNebula experts so that the attendees are able to see OpenNebula in action and manage their own private cloud!

 

Check the following links for registration and agenda information:

 

New Maintenance Release: OpenNebula 5.4.13

The OpenNebula team is pleased to announce the availability of OpenNebula 5.4.13, a new maintenance release of the 5.4.x series. This version fixes multiple bugs and adds some minor features, and, with the recent release of the 5.6 beta, closes the 5.4.x series.

Check the release notes for the complete set of changes.

Relevant Links