FaSS – Fair Share Scheduler for OpenNebula

Do you operate a small Cloud infrastructure and need to optimise the centre occupancy? Then FaSS, a Fair Share Scheduler for OpenNebula (ONE), will address your issues!

FaSS is a product of the INDIGO-DataCloud project and has been developed to boost small Cloud infrastructures, like those used for scientific computing, which often operate in a saturated regime: a condition that constrains the free auto-scaling of applications. In those cases, tenants typically pay a priori for a fraction of the overall resources and are assigned a fixed quota accordingly. Nevertheless, they might want to be able to exceed their quota and to profit from additional resources temporarily left unused by other tenants. Within this business model, one definitely needs an advanced scheduling strategy.

FaSS satisfies resource requests according to an algorithm that prioritises tasks based on:

  • an initial weight;
  • the historical resource usage of the project.

Software design

The software was designed to be as unintrusive as possible in the ONE code, and interacts with ONE exclusively through its XML-RPC interface. Additionally, the native ONE scheduler is preserved for matching requests to available resources.

FaSS is composed of five functional components: the Priority Manager (PM), a set of fair-share algorithms, Terminator, the XML-RPC interface and the database.

  • The PM is the main module. It periodically requests the list of pending Virtual Machines (VMs) from ONE and recalculates the priorities in the queue by interacting with an algorithm module of choice.
  • The default algorithm in FaSS v1 is Slurm’s MultiFactor.
  • Terminator runs asynchronously with respect to the PM. It is responsible for removing from the queue VMs in pending state for too long, as well as terminating, suspending or powering-off running VMs after a configurable Time-to-Live.
  • The XML-RPC server of FaSS intercepts the calls from the First-In-First-Out scheduler of ONE and sends back the reordered VMs queue.
  • The FaSS database is InfluxDB. It stores the initial and recalculated VM priorities and some additional information for accounting purposes. No information already present in the ONE DB is duplicated in FaSS.
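FaSS v1’s exact MultiFactor formula is not reproduced here, but the reordering the PM performs can be pictured with a toy calculation in which priority is the initial weight divided by the historical usage (the queue file, field layout and formula below are all invented for illustration):

```shell
# Toy pending queue: vm_id  initial_weight  historical_usage (invented data)
cat > queue.txt <<'EOF'
101 10 2
102 10 8
103 20 8
EOF
# Higher weight and lower past usage give a higher priority; sort descending
awk '{ printf "%s %.2f\n", $1, $2 / $3 }' queue.txt | sort -k2,2 -rn
```

VM 101 ends up first: it has the same initial weight as 102 but far less past usage.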

How can I install FaSS?

Please find the detailed instructions on GitHub.

The only prerequisites are:

  • To install ONE (versions above 5.4 work with FaSS v1.2 and later; if you run an earlier version of ONE you need FaSS v1.1 or earlier);
  • To install InfluxDB and create fassdb.

All the other requested packages are installed automatically with the rpm.

You can then install FaSS as root user:
$ cd /tmp/
$ git clone
$ cd one-fass
$ cd rpms
$ yum localinstall one-fass-service-v1.3-1.3.x86_64.rpm

The last step is to adjust the configuration file of the ONE scheduler so that it points at the FaSS endpoint instead of oned. Change:
ONE_XMLRPC = "http://localhost:2633/RPC2"
to:
ONE_XMLRPC = "http://localhost:2637/RPC2"

Is it difficult to use?

Not at all! A detailed usage description can be found on GitHub.

  1. Edit the initial shares for every user:
    $ cd /tmp/one-fass/etc
    and edit the file:
  2. Start FaSS:
    systemctl start fass

Now FaSS is ready and working!

Additional features

There are a few additional features that help you keep your Cloud infrastructure clean:

  • You can make your VMs dynamic, so that they are terminated after a specific Time-to-Live, by instantiating with:
    $ onetemplate instantiate <yourtemplateid> --raw static_vm=0
  • Instead of being terminated, your VMs can be powered-off, suspended or rebooted by changing the action to be performed in:

What’s next?

We are implementing several new features in FaSS, for example the possibility of setting the Time-to-Live per user. We are also planning to test several new algorithms. So stay tuned!

Berta – Managing The Lifecycle of Virtual Machines

Managing resource usage in private or community clouds where the provisioning model is not pay-per-use oriented is often complicated. Resources are available to end users seemingly for free, which means that one of the main motivators for responsible resource usage — a monthly bill — is missing. This can eventually lead to a situation in which the pool of available resources is depleted and a large portion of existing allocations is underused, unused, or entirely forgotten. For a resource provider interested in offering a fixed amount of resources to the largest possible number of users, this issue needs to be addressed. Read more

Update of VMware Cloud Reference Architecture

A year ago OpenNebula Systems published the VMware Cloud Reference Architecture, a blueprint to guide IT architects, consultants, administrators and field practitioners in the design and deployment of public and private clouds based on OpenNebula on top of VMware vCenter. This reference architecture is intended for organizations with existing VMware environments or expertise who want to limit changes to their underlying VMware infrastructure, but see benefits in a common provisioning layer via OpenNebula to control compute workloads and want to take a step toward liberating their stack from vendor lock-in.

Many things have changed since that document was published. This is a brief summary of what’s new and ready for you:

  • OpenNebula now allows you to upload, clone and delete VMDK files.
  • The VM import workflow has been greatly improved in Sunstone, making it easier to import your existing workloads into OpenNebula.
  • Resource pools defined in vCenter are supported by OpenNebula so available memory and CPU can be partitioned. When launching a VM from OpenNebula, a resource pool can be selected automatically or the user can choose one.
  • When a VM is instantiated from a VM Template, the datastore associated can be chosen. If DRS is enabled, then vCenter will pick the optimal Datastore to deploy the VM.
  • New disks can be hot-plugged, and OpenNebula can be prevented from erasing the VM disks when a shutdown or cancel operation is applied to a VM, so users won’t lose data accidentally.
  • Support for vCenter customization specifications, as a complementary alternative to contextualization.
  • Multiple vCenter clusters can now be defined in a single VM Template definition.
  • Control how disks are managed in vCenter through the KEEPS_DISKS_ON_DONE template variable, which helps you protect users’ data against accidental deletion.
  • Datastores in a Storage DRS can be used as individual datastores by OpenNebula.
  • A bandwidth limit per VM network interface can be applied. VM’s network usage information is now gathered from vCenter.
  • It’s possible to access the OneGate server from vCenter VMs since the onegate token is passed through to the VM.
  • And last but not least, cool features added to Sunstone: smoother vCenter resource import, extended Cloud View functionality, and new tags for resources.

This blueprint has been created from the collective information and experiences from hundreds of users and cloud client engagements so your feedback is extremely valuable.

More features are continuously being added, OpenNebula is a project in constant evolution, so stay tuned and do not forget to send us your feedback!

Open Cloud Free Session – OpenNebula Barcelona User Group



Date and Time: Mon, October 24, 2016 2:00 PM – 5:00 PM

OpenNebula Barcelona User Group is a gathering of our users in the Barcelona area to share best practices, discuss technical questions, network, learn from each other and enjoy. Direct Link

Taking advantage of the OpenNebula Conference in Barcelona, its user group, in collaboration with the OpenNebula project and CSUC, is organising a free open cloud session to introduce the project and share new local developments and use cases with the community and anyone interested in Open Cloud topics (Free Registration).

Agenda: (Free Registration -> Register here and reserve your seat)

14:00 Welcome/Bienvenida/Benvinguda
14:05 OpenNebula Project: Open Cloud in essence – Dr. Ruben Santiago Montero (Chief Technical Officer & Co-Founder)
14:30 Cloud Bursting and VMware: New OpenNebula VCLOUD driver – Jordi Guijarro (Cloud & Security Manager – CSUC)
14:50 Barcelona Users Group
15:00 ACB League use case – Joaquin Villanueva (Director of Media Technology)
15:20 UPC Research Lab (RDLAB) use case – Gabriel Verdejo (IT Manager)
15:40 University of Valencia use case – Israel Ribot (System Administrator)
16:00 Coffee & Networking
16:30 EOF


ONEBCN Team in collaboration with CSUC

New DRBD Manage add-on for Highly Available storage

DRBD backed storage is now integrated into OpenNebula with the new DRBD Manage addon.

DRBD provides transparent, real-time replication of block devices without the need for specialty hardware. DRBD Manage is an administrative tool which facilitates easy Logical Volume Management (LVM) and configuration files for multi-node DRBD clusters.

With the DRBD Manage driver, create each new image for your virtual infrastructure as a DRBD volume. Volumes intelligently balance the load on your storage nodes. Alternatively, assign volumes to the specific nodes that you want in use. This is a simple scale-out storage solution, supporting the capability to add new nodes to your storage cluster at any time. This, combined with the flexibility of LVM, allows DRBD to keep up with your ever-increasing storage requirements.

DRBD 9 and DRBD Manage allow transferring Images to Virtualization hosts via the DRBD Transport protocol. This allows images to be available nearly instantly on host nodes without requiring them to have storage space available.

Below is a diagram showing a simple OpenNebula cluster using the DRBD Manage Driver. This cluster has a Front End, two storage nodes, and a single virtualization host. The host has two images attached to it via DRBD Transport. Both images are deployed to double redundancy and are being replicated in real time across both storage nodes. This means that the failure of a single storage node will not disrupt IO on the host. All nodes have a local copy of DRBD Manage’s control volume.



The DRBD Manage driver offers:

  • Data redundancy
  • Automatic fail-overs if a storage node fails
  • Database and high I/O application compatible
  • Transfers images over the network with DRBD Transport
  • Quickly attaches images to VMs
  • Fast image clones

Creating Customized Images

One of the steps when preparing an OpenNebula installation is the creation of Virtual Machine images for base Operating Systems or appliances. Some of these images can be downloaded from the marketplace, but you may need an OS that is not in the marketplace, or the images may need to be customized in some other way.

I’m going to describe an automated way to customize the base images provided by the Linux distributions using the software libguestfs.

The software libguestfs comes with tools to create and modify Virtual Machine images in a number of formats that qemu understands. Some of these utilities let us add or delete files inside the images, or execute scripts using the image filesystem as the root filesystem.

The first step is getting an image from the distribution web page. I usually get these images as they are very small and don’t have extra software. For this example we will use CentOS 7. Head to and download the image CentOS-7-x86_64-GenericCloud.qcow2c.

One of the customizations we have to make to this image is to uninstall the cloud-init package that comes by default with that image and install the OpenNebula context package. The easiest way to install extra packages that are not in a repository is to add them to a CDROM image that will be provided to the customization tool. So head to and download the latest context package.

To create the CDROM image we can use genisoimage. Remember to add a label so it’s easier to mount. Here we are going to use the label PACKAGES:

  • Copy the packages to a directory, for example packages
  • Execute genisoimage to create the iso that contains those files:
$ genisoimage -o packages.iso -R -J -V PACKAGES packages/

Now we need to prepare a script with the customizations to be done in the image. For example:


#!/bin/bash

# Mount the packages CDROM using the label we set with genisoimage
# (assumed step; adjust if your label differs)
mount LABEL=PACKAGES /mnt

# Install the OpenNebula context package
rpm -Uvh /mnt/one-context*rpm

# Remove cloud-init and NetworkManager
yum remove -y NetworkManager cloud-init

# Install growpart and upgrade util-linux, used for filesystem resizing
yum install -y epel-release --nogpgcheck
yum install -y cloud-utils-growpart --nogpgcheck
yum upgrade -y util-linux --nogpgcheck

# Install ruby for onegate tool
yum install -y ruby

Instead of modifying the original downloaded image, we can use a qcow2 feature: creating a new image that is backed by another one. This way we keep the original image in case we are not happy with the modifications, or we want to create another image with different customizations.

$ qemu-img create -f qcow2 -b CentOS-7-x86_64-GenericCloud.qcow2c centos.qcow2

Now all is prepared to customize the image. The command we are going to use is virt-customize. It can do a lot of modifications to the image but we are only going to do two. Execute the previous script and disable root password, just in case. The command is this one:

$ virt-customize -v --attach packages.iso --format qcow2 -a centos.qcow2 --run --root-password disabled

It attaches two images, the ISO image with the packages and the OS hard disk, executes the script that we previously created and disables the root password.

After the command is run, the image centos.qcow2 contains the modifications we made to the original image. Now we can convert it to any other format we need (for example vmdk) or to a standalone qcow2 image, that is, one that does not depend on any other. Here are the commands to convert it to qcow2 (compatible with old qemu versions) and vmdk:

$ qemu-img convert -f qcow2 -O qcow2 -o compat=0.10 centos.qcow2 centos-final.qcow2
$ qemu-img convert -f qcow2 -O vmdk centos.qcow2 centos-final.vmdk

There are other customizations you can do, for example set a fixed password with --root-password password:r00tp4ssw0rd. You can also use virt-sparsify to discard the blocks that are not used by the filesystem. Check the libguestfs web page to learn about all the possibilities.

You can also take a look at the presentation I gave about this topic in the CentOS dojo held in Barcelona this year:

Create a context ready VyOS Image for OpenNebula

Today I’m writing about the steps I’ve followed when creating a KVM VyOS image for OpenNebula that accepts some contextualization variables.

I hope this post helps users extend the contextualization support, create your own VyOS appliances and share them in the marketplace. For example, why don’t you try to follow these steps to create an image for Xen or VMware?

The first part of the post will help you to create a KVM image using Sunstone, the second part explains how we can add contextualization to our VyOS image.

Let’s begin!

First part – Creating a VyOS KVM image

This is easy for most users; however, I think it’s always good to show these steps to newcomers. These are only my recommendations, not mandatory; I’m just letting you know what works for me.

  1. First, download the latest stable image for 64 bits (or 32 bits) from VyOS, adding the ISO as a virtio CDROM image (vd prefix).
  2. Let’s create a 2GB Hard Disk image. I use a persistent, empty datablock to create a VirtIO HDD. Once the HDD is created, remember to change the TYPE from DATABLOCK to OS.
  3. Once we have an ISO image and a HDD it’s time to create a template. In my case I add a network interface so I can later configure VyOS using SSH. Using the wizard these are the most important parts I configure:
    • General -> Memory. We’ll need at least 256 MB RAM (512 MB recommended).
    • General -> Hypervisor. KVM in my example :-D
    • Graphics -> VNC.
    • Network. When creating a NIC I use the advanced options and select virtio for the NIC Model.
    • OS Booting. Arch -> x86_64
    • OS Booting 1st Boot -> CDROM. It’s quite important to ensure the VM boots from the CD first, unless you want an “AMD64 – No bootable device” error.
    • OS Booting 2nd Boot -> HD
  4. After our template is ready, let’s instantiate it! If everything works fine we’ll have access to the console using VNC.
  5. The VyOS default username and password are both vyos. Once we’re in, we can install VyOS on our HDD image using the following command:
    install image
  6. The installation wizard will ask some questions:
    • VyOS image to a local hard drive. Would you like to continue? (Yes/No) [Yes]:
    • Partition (Auto/Parted/Skip) [Auto]:
      I found the following drives on your system:
      vda 2097MB
      vdb 247MB
      Install the image on? [vda]:
    • This will destroy all data on /dev/vda.

      Continue? (Yes/No) [No]: Yes

    • How big of a root partition should I create? (1000MB – 2097MB) [2097]MB:

      Creating filesystem on /dev/vda1: OK

    • What would you like to name this image? [1.1.5]
    • I found the following configuration files:…
      Which one should I copy to vda? [/config/config.boot]:
    • Enter password for user ‘vyos’:
    • Which drive should GRUB modify the boot partition on? [vda]:
  7. Once the system is installed we can run the poweroff command.
  8. The HDD is ready so we only have to update our template removing the CDROM and selecting HD as the 1st Boot device in the OS Booting tab. Then we can instantiate the VyOS template again.
  9. In the second part I’ll use SSH to run some commands so I first enable a NIC and start the SSH service using the following VyOS commands. In my example I’m using the IP address.
    set interfaces ethernet eth0 address
    set service ssh
  10. Now we have a VyOS image with SSH and we’re ready to start with part two.

Second part – Adding the contextualization script

VyOS is a fork of the Vyatta Community Edition. Vyatta’s forum was full of useful information and it helped me find answers to “where should I start to add contextualization?”. Unfortunately, when Brocade acquired Vyatta, the forum disappeared, so I don’t really know who should receive credit for the info I gathered… I can only say thanks to Vyatta’s community and wish the best for the new VyOS community.

All right. Let’s try to explain the magic.

If we add to VyOS a script called vyatta-postconfig-bootup.script, VyOS will run any command in that script once it is ready and the configuration has been loaded. In this script we try to mount OpenNebula’s CDROM, which contains the script that loads the contextualization environment variables (please see the official OpenNebula documentation to get a deeper understanding of contextualization). In any case, VyOS will launch the bash script afterwards.

The script (it can be renamed, of course) uses the vyatta-cfg-cmd-wrapper command to encapsulate VyOS commands that alter the configuration. The wrapper commands must be enclosed between a begin, a commit and, of course, an end. Using one of OpenNebula’s contextualization scripts as a template, I’ve added VyOS commands that will be executed if certain context variables are set (e.g. the IP and MASK…). I think this script is quite easy to follow, but don’t hesitate to send your doubts and feedback so I can add a FAQ to this post.
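A minimal sketch of that begin/commit/end pattern follows; the interface and address below are invented for the example, and outside a VyOS box the wrapper binary is replaced by echo so the command sequence can still be inspected:

```shell
# Path of the wrapper on a real VyOS/Vyatta system
WRAPPER=/opt/vyatta/sbin/vyatta-cfg-cmd-wrapper
# Outside VyOS, fall back to echo so the command sequence is still visible
[ -x "$WRAPPER" ] || WRAPPER="echo vyatta-cfg-cmd-wrapper"

$WRAPPER begin
# Illustrative values; the real script takes these from context variables
$WRAPPER set interfaces ethernet eth0 address 192.0.2.10/24
$WRAPPER commit
$WRAPPER end
```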

Hands on.

  1. We’ll need two bash scripts that I’ve uploaded to my Github account. You can clone the repo:
    git clone
    cd vyos-onecontext
  2. Now we’ll scp the files to our VyOS VM using the vyos username and the vyos password (unless you’ve changed it during the installation). My VyOS router is listening on the address.
    scp vyos@
    scp vyatta-postconfig-bootup.script vyos@
  3. Using SSH and sudo we’ll move the scripts to the right directories:
    ssh vyos@
    sudo mv /tmp/vyatta-postconfig-bootup.script /opt/vyatta/etc/config/scripts/vyatta-postconfig-bootup.script
    sudo mv /tmp/ /opt/vyatta/sbin/
  4. In order to use the contextualization, we must first remove the SSH service, the ethernet address and any other changes we’ve made to the VyOS config:
    delete service ssh
    delete interfaces ethernet eth0
  5. We can edit the file /boot/grub/grub.cfg (sudo vi /boot/grub/grub.cfg) and delete the following lines:
    serial --unit=0 --speed=9600
    terminal_output --append serial
    echo -n Press ESC to enter the Grub menu...
    if sleep --verbose --interruptible 5 ; then
    terminal_input console serial
    menuentry "VyOS 1.1.5 linux (Serial console)" {
    linux /boot/1.1.5/vmlinuz boot=live quiet vyatta-union=/boot/1.1.5 console=tty0 console=ttyS0,9600
    initrd /boot/1.1.5/initrd.img
    menuentry "VyOS 1.1.5 linux (USB console)" {
    linux /boot/1.1.5/vmlinuz boot=live quiet vyatta-union=/boot/1.1.5 console=tty0 console=ttyUSB0,9600
    initrd /boot/1.1.5/initrd.img
    menuentry "Lost password change 1.1.5 (Serial console)" {
    linux /boot/1.1.5/vmlinuz boot=live quiet vyatta-union=/boot/1.1.5 selinux=0 console=tty0 console=ttyS0,9600 init=/opt/vyatta/sbin/standalone_root_pw_reset
    initrd /boot/1.1.5/initrd.img
    menuentry "Lost password change 1.1.5 (USB console)" {
    linux /boot/1.1.5/vmlinuz boot=live quiet vyatta-union=/boot/1.1.5 selinux=0 console=tty0 console=ttyUSB0,9600 init=/opt/vyatta/sbin/standalone_root_pw_reset
    initrd /boot/1.1.5/initrd.img

    Removing the console will help us avoid the following error: INIT: Id “TO” respawning too fast: disabled for 5 minutes. Thanks to this post!

  6. Unless we’ve added a KVM serial port we can delete the console:
    delete system console
  7. Finally we can delete the bash history, commit and save the changes:
    > /home/vyos/.bash_history

Please remember: Once you reboot your image, the contextualization script will try to autoconfigure your VyOS router, however no changes are saved unless you explicitly use the save command. If you use the save command you should stop using the contextualization scripts to avoid clashes between your saved configuration and the one from context… so execute:

sudo sh -c 'cat /dev/null > /opt/vyatta/etc/config/scripts/vyatta-postconfig-bootup.script'

Phew! It’s been a long post and it’s hard to include all the information without boring you. I hope you now understand how you can use some scripts to add context to your own VyOS image. Soon I’ll post some more information about VyOS here, but in the meantime you can start improving your VyOS images.


OpenNebula Conf 2014: first speakers confirmed!


Hello dear fellows!

Today I’d like to remind you to hurry up with sending your proposals for the OpenNebula Conf. July 15th will be your last chance to submit your talk and join us as a speaker on December 2nd – 4th this year in Berlin. The thriftier among you should also know that July 15th is the last day early-bird tickets are on sale.

We already have some confirmed speakers, too. If you have a look at the event website, you can admire the abstracts of the talks of Armin Deliomini (Runtastic) and Stefan Kooman; more speakers, such as Alberto Zuin, will follow soon.

Now ain’t that some good news?

Foreman Integration

Firing up a new virtual machine is smooth, straightforward and often done in just seconds, but to reach this point you have to invest some time and effort in setting up a nice and reliable cloud environment. There is a lot to do: you have to prepare images with an operating system installed, take care of your DNS records, put SSH keys into your fresh virtual machine and install some useful software on top of it, like an Apache web server. For automating these tasks, OpenNebula provides hooks and contextualization.

We at Netways have been using Foreman in combination with Puppet to do all these tasks on bare-metal systems, and have now implemented compute resource functionality for the Foreman project. It can be used to deploy virtual machines in OpenNebula using the Foreman interface, which also configures DNS, DHCP, PXE, Puppet and so on. The functionality is implemented by using and extending the ruby fog library.

The pull requests can be found on GitHub:

And a quick demo can be found here (the speaker is a little bit dozy and it’s in German, but you can get an idea of how it works):

The idea is to create a blank VM (empty datablock image) via Foreman in OpenNebula, which will then be fully deployed from scratch. Installation is done with a PXE boot and a Kickstart/Preseed installation. Additional software on top, like Apache and the like, is installed and configured with Puppet. Everything can be chosen via the Foreman interface, which interacts with all infrastructure elements. It would also be possible to use contextualized, prepared images, but we have not implemented that yet.

We have been using this feature in production for some days now and it is really cool. It definitely will not, and should not, replace the Sunstone interface; it is just an interaction via the XML-RPC API of OpenNebula.

All feedback is very welcome, and contributions or help getting it pushed to the master branches of the projects (Foreman, Fog) are of course appreciated. For further information or questions, leave your comments below.

OpenNebula LXC Driver Plugin (OneLXC)

Work done by China Mobile in the Big Cloud Elastic Computing System

The aim of this post is to describe a new OpenNebula LXC driver (OneLXC) developed by China Mobile to allow the management of hosts and the deployment of lxc domains in OpenNebula using the LXC hypervisor.


OneLXC mainly consists of two components:

  • IM_MAD: a series of remote scripts that are able to monitor the remote hosts
  • VMM_MAD: a series of remote scripts to manage lxc domains.

OneLXC is very similar to the kvm driver, since libvirt supports both kvm and lxc. Both use the virsh command to monitor hosts and operate virtual machines.

Currently OneLXC supports some simple functions:

  • monitoring host information, for example CPU and memory
  • deploying/deleting/monitoring (poll) lxc domains and their info

Development Environment

  • three host machines running the Ubuntu 12.04 operating system (amd64)
  • opennebula-3.2.1
  • libvirt-0.9.8
  • lxc-0.7.5

How to install and use OneLXC Driver?

To install the OneLXC driver, run the “” script provided. This script will copy the necessary files into the OpenNebula installation tree alongside OpenNebula itself.

Driver Configuration

In order to enable the OneLXC driver, it is necessary to modify “oned.conf” accordingly. This is achieved by setting the IM_MAD and VM_MAD options as follows:

# LXC Information Driver Manager Configuration
# -r number of retries when monitoring a host
# -t number of threads, i.e. number of hosts monitored at the same time
IM_MAD = [
name = "im_lxc",
executable = "one_im_ssh",
arguments = "-r 0 -t 15 lxc" ]

VM_MAD = [
name = "vmm_lxc",
executable = "one_vmm_exec",
arguments = "-t 15 -r 0 lxc",
default = "vmm_exec/vmm_exec_lxc.conf",
type = "lxc" ]

The name of the driver needs to be provided when adding a new host to OpenNebula. For example, we can use the commands “onehost create” and “onehost list” to create and list hosts.

After adding hosts, we can use onevm create and onevm show <vm_id> to deploy an lxc domain and show its information, for example as follows:

Driver files

The OneLXC driver package contains the following files. Note that they are referenced using $ONE_LOCATION as the base directory, i.e. assuming a self-contained installation of OpenNebula.

  • $ONE_LOCATION/etc/vmm_exec/vmm_exec_lxc.conf: Configuration file to define the default values for the LXC domain definitions.
  • $ONE_LOCATION/var/remotes/vmm/lxc/: Scripts used to perform the operations on the lxc domains. These files are called “remotes”, meaning they are copied to the remote hosts and executed there.
  • $ONE_LOCATION/var/remotes/im/lxc.d/: Scripts used to fetch information from the remote hosts (memory, cpu use…). These scripts are copied to the remote hosts and executed there.
  • oned.conf: Example OpenNebula configuration file with the LXC drivers enabled.

Source files

  • src/vmm/ The libvirt driver to generate the lxc domain’s deployment configuration file.

Image files

Unlike kvm and xen image files, the lxc domain’s image is actually a directory called “rootfs”. To make transfers easier, we compress it, copy it to the target host and decompress it again at the destination. Note: the file permissions of /usr/bin/sudo in the lxc domain must be 4755; otherwise, root privileges cannot be used to execute commands in the lxc domain.
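The compress/copy/decompress cycle can be sketched with plain tar; every name here (rootfs, lxc.tar.gz, target/) is illustrative, and -p is used on extraction so the setuid bit on sudo survives:

```shell
# Build a minimal stand-in for an lxc rootfs directory (illustrative names)
mkdir -p rootfs/usr/bin
touch rootfs/usr/bin/sudo
chmod 4755 rootfs/usr/bin/sudo        # setuid root + 755, as the driver requires
tar czpf lxc.tar.gz rootfs            # compress on the source host
mkdir -p target
tar xzpf lxc.tar.gz -C target         # decompress on the destination (-p keeps the mode)
ls -l target/rootfs/usr/bin/sudo
```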

Virtual Machine’s Configuration file

NAME = lxc_2
CPU = 1
MEMORY = 1024
VCPU = 2

DISK = [
source = "/opt/nebula/images/lxc.tar.gz" ]

Bugs and problems

  • There is a synchronisation problem between the “cancel” operation and “” in the OneLXC driver.
  • The OneLXC driver can’t implement the “reboot”, “shutdown” and “restart” operations, possibly because libvirt or LXC doesn’t support them.
  • How can the lxc “config” file in the “rootfs” directory be generated dynamically? Each lxc domain has a different image file path.
  • There are probably more bugs that we have not yet found.

Source Code Download

Because the lxc driver functionality is not yet complete, I will submit the code later. If you want it now, you can download the draft version from here: