
Using LXD and KVM on the Same Host

Daniel Clavijo

Cloud Engineer at OpenNebula

Sep 2, 2019

LXD and KVM are hypervisors that can run simultaneously on the same host, thanks to the different nature of the virtual instances they create. KVM creates virtual machines using full virtualization, while LXD creates a virtual run-time environment on top of the OS kernel: the former requires hardware virtualization support in the CPU, the latter only a suitable kernel.
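
Before installing anything, you can verify both requirements on the host. A minimal check, assuming standard Linux tooling (lxc-checkconfig ships with the LXC utilities):

# A count greater than 0 means the CPU exposes hardware virtualization
# (VT-x/AMD-V), which KVM needs for full virtualization.
grep -Ec '(vmx|svm)' /proc/cpuinfo

# Print a summary of the namespace and cgroup features LXD relies on.
lxc-checkconfig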

OpenNebula 5.8 has virtualization drivers for both LXD and KVM, and although the two can peacefully coexist, the current architecture treats every virtualization node as having a single hypervisor. When a VM is queued on the scheduler, it is deployed on a suitable host, and the driver used to deploy it is determined by that host's hypervisor type. There is an open issue describing this limitation, which will require changes to several logical components of OpenNebula. Fortunately, it is possible to overcome the situation with a very simple workaround.

When a host is added to an OpenNebula frontend, you must enter a hostname associated with that host; it can be either the host's IP address or a name that resolves to that IP address. This means the frontend may refer to the same host by several names. You can add those names in the DNS server used by the frontend or in its /etc/hosts file.

In this post we will create a single-server LXD setup using miniONE and then add that same host as a KVM node.

Hands-on

Deploy the LXD node using miniONE. Note that there is an extra command-line argument for the LXD flavor. Make sure you use an Ubuntu host, since the LXD driver is only supported on Ubuntu distros.
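
A minimal sketch of the deployment, assuming the miniONE script is fetched from its GitHub releases and that --lxd selects the LXD flavor (check minione --help for the exact option in your version):

# Download miniONE and run the single-server evaluation install with LXD.
wget 'https://github.com/OpenNebula/minione/releases/latest/download/minione'
sudo bash minione --lxd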

Check the hosts.

root@LXDnKVM:~# onehost list
ID NAME      CLUSTER TVM ALLOCATED_CPU  ALLOCATED_MEM  STAT
 0 localhost default   0  0 / 200 (0%)  0K / 1.9G (0%) on
root@LXDnKVM:~# onehost show 0 | grep MAD
IM_MAD : lxd
VM_MAD : lxd
IM_MAD="lxd"
VM_MAD="lxd"

Add the kvm name to the /etc/hosts file.

10.10.0.45 LXDnKVM  # one-contextd
127.0.0.1 localhost kvm
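
A quick check that the new name resolves as expected:

# Both commands should point at the loopback address.
getent hosts kvm
ping -c 1 kvm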

Now the oneadmin user is already able to access the host via SSH, but you need to make sure the kvm host is a known host.

oneadmin@LXDnKVM:~$ ssh kvm
The authenticity of host 'kvm (127.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:dwPyCUgSN38eh9kL2cn/l2PQ67aUVOjt37JVceLCbZ0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'kvm' (ECDSA) to the list of known hosts.
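
If you prefer to do this non-interactively, ssh-keyscan can pre-populate the known hosts file instead:

# Append the host key advertised under the new name to oneadmin's known hosts.
ssh-keyscan -H kvm >> ~/.ssh/known_hosts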

Now add the “new” host.

oneadmin@LXDnKVM:~$ onehost create kvm -v kvm -i kvm
oneadmin@LXDnKVM:~$ onehost list
ID NAME      CLUSTER TVM ALLOCATED_CPU  ALLOCATED_MEM  STAT
 1 kvm       default   0  0 / 200 (0%)  0K / 1.9G (0%) on
 0 localhost default   0  0 / 200 (0%)  0K / 1.9G (0%) on
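
It is worth double-checking that the new entry really uses the KVM drivers; repeating the earlier check should now report kvm instead of lxd:

# Both IM_MAD and VM_MAD should read "kvm" for host 1.
onehost show 1 | grep MAD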

Let’s create a VM and a container.

oneadmin@LXDnKVM:~$ onetemplate instantiate 0
VM ID: 0
oneadmin@LXDnKVM:~$ onevm list
ID USER     GROUP    NAME            STAT UCPU UMEM HOST      TIME
 0 oneadmin oneadmin CentOS 7 - KVM- runn    0   0K kvm       0d 00h00
oneadmin@LXDnKVM:~$ onetemplate instantiate 0
VM ID: 1
oneadmin@LXDnKVM:~$ onevm list
ID USER     GROUP    NAME            STAT UCPU UMEM HOST      TIME
 1 oneadmin oneadmin CentOS 7 - KVM- runn    0   0K localhost 0d 00h00
 0 oneadmin oneadmin CentOS 7 - KVM- runn    0   0K kvm       0d 00h00
oneadmin@LXDnKVM:~$ virsh list
 Id    Name                           State
----------------------------------------------------
 1     one-0                          running
oneadmin@LXDnKVM:~$ lxc list
+-------+---------+---------------------+------+------------+-----------+
| NAME  | STATE   | IPV4                | IPV6 | TYPE       | SNAPSHOTS |
+-------+---------+---------------------+------+------------+-----------+
| one-1 | RUNNING | 172.16.100.3 (eth0) |      | PERSISTENT | 0         |
+-------+---------+---------------------+------+------------+-----------+

Note that we instantiated the same template twice, and the scheduler deployed the first instance as a virtual machine and the second as a container: after the first deployment the most recently added host (kvm) already had resources allocated, while the first one (localhost) was still empty, so the second instance went there.
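
If you want to bypass the scheduler and choose the hypervisor yourself, you can also place an instance manually; a sketch using the IDs from this example (the new VM is assumed to get ID 2):

# Instantiate on hold so the scheduler leaves the VM alone, then
# deploy it explicitly on host 0 (the LXD one).
onetemplate instantiate 0 --hold
onevm deploy 2 0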

Tips

  • You can tweak the capacity section of the hosts to plan resource allocation and enforce the desired resource quota for each hypervisor.
  • You can create an LXD cluster out of your existing KVM cluster.
  • Since the LXD driver is able to deploy KVM images and KVM VM templates, make sure you specify in the template where you want the VM or container to run; see the snippet after this list.
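
One way to pin a template to a hypervisor is a scheduling requirement on the HYPERVISOR attribute that each host reports; a minimal sketch against template 0 from this setup (sched.txt is just a scratch file name):

# Restrict the template to hosts monitored as KVM.
cat > sched.txt <<'EOF'
SCHED_REQUIREMENTS = "HYPERVISOR=\"kvm\""
EOF
onetemplate update 0 sched.txt --append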

Hope this is helpful! Send your feedback.

2 Comments

  1. mg

    This is great news.

    Is it also possible to have Firecracker and KVM on the same node?

