In the last post we saw the beautiful new face of Sunstone. Even though we are putting lots of effort into the web interface, we are also giving some love to the command line interface.

Until now, creating images and templates from the command line consisted of writing a template file and feeding it to the oneimage/onetemplate create command. That possibility still exists, but now we can also create simple images or VM templates using just command parameters.

For example, registering an image can be done with this command:

$ oneimage create -d default --name ttylinux \
--path http://marketplace.c12g.com/appliance/4fc76a938fb81d3517000003/download
ID: 4

You can also pass a local file to --path, but take into account that you need to configure the datastore's SAFE_DIRS parameter to make it work.
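A minimal sketch of what that could look like (the directory and file names here are just illustrative): add the directory to the datastore's SAFE_DIRS attribute and then register the image from a local path:

$ onedatastore update default
SAFE_DIRS="/var/tmp"

$ oneimage create -d default --name local_image --path /var/tmp/local.img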

We can also create an image from scratch, for example, a raw 512 MB image that will be connected using virtio:

$ oneimage create --name scratch --prefix vd --type datablock --fstype raw \
--size 512m -d default
ID: 5

You can get more information on the parameters by issuing oneimage create --help.

Creation of VM templates is also very similar. For example, creating a VM that uses both disks and a network, adding contextualization options and enabling VNC:

$ onetemplate create --name my_vm --cpu 4 --vcpu 4 --memory 16g \
--disk ttylinux,scratch --network network --net_context --vnc
ID: 1
$ onetemplate instantiate my_vm
VM ID: 10

The output of onevm show has also been changed to show disks and NICs in an easier-to-read fashion:

$ onevm show 10
VIRTUAL MACHINE 10 INFORMATION
ID : 10
NAME : my_vm-10

[...]

VM DISKS
 ID TARGET IMAGE                               TYPE SAVE SAVE_AS
  0    hda ttylinux                            file   NO       -
  1    vda scratch                             file   NO       -

VM NICS
 ID NETWORK                                IP               MAC VLAN BRIDGE
  0 network                       192.168.0.8 02:00:c0:a8:00:08   no vbr0

VIRTUAL MACHINE TEMPLATE
CONTEXT=[
  DISK_ID="2",
  ETH0_DNS="192.168.0.1",
  ETH0_GATEWAY="192.168.0.1",
  ETH0_IP="192.168.0.8",
  ETH0_MASK="255.255.255.0",
  TARGET="hdb" ]
CPU="4"
GRAPHICS=[
  LISTEN="0.0.0.0",
  PORT="5910",
  TYPE="vnc" ]
MEMORY="16384"
TEMPLATE_ID="1"
VCPU="4"
VMID="10"

This way you can get useful information about the VM at a glance. If you need more information you can still use the -x option or the new --all option, which will print all the information in the template as in previous versions.

oneimage show was also changed so you can check which VMs are using an image:

$ oneimage show scratch
IMAGE 5 INFORMATION
ID : 5
NAME : scratch

[...]

VIRTUAL MACHINES

ID USER     GROUP    NAME            STAT UCPU    UMEM HOST             TIME
10 oneadmin oneadmin my_vm-10        pend    0      0K              0d 00h03

This is also true for onevnet show:

$ onevnet show network
VIRTUAL NETWORK 0 INFORMATION
ID : 0
NAME : network

[...]

VIRTUAL MACHINES

ID USER     GROUP    NAME            STAT UCPU    UMEM HOST             TIME
 9 oneadmin oneadmin template1       pend    0      0K              0d 00h30
10 oneadmin oneadmin my_vm-10        pend    0      0K              0d 00h04

Another nice parameter is --dry, which can be used with onetemplate and oneimage create. It prints the generated template but does not register it. This is useful when you want to create a complex template but don't want to type it from scratch: just redirect the output to a file and edit it to add features not available from the command line.
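For example, a quick sketch reusing the commands above (the file name is just illustrative):

$ onetemplate create --name my_vm --cpu 4 --memory 16g --disk ttylinux,scratch --dry > my_vm.tpl
# edit my_vm.tpl to add the extra attributes, then register it
$ onetemplate create my_vm.tpl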

One last thing: the parameters for onevm create are exactly the same as those of onetemplate create. If you just want to create a fire-and-forget VM, you can use onevm create the same way.
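For instance, a sketch mirroring the onetemplate create example above:

$ onevm create --name my_vm --cpu 4 --vcpu 4 --memory 16g \
--disk ttylinux,scratch --network network --net_context --vnc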

OpenNebula 4.0 will be available for testing really soon. Until then, we will keep you updated with the new features in posts like this one. You can also check the posts released in the last weeks about the Ceph integration, the new scheduling feature, and the new Sunstone.
Stay tuned!

Following the series of posts about the new OpenNebula 4.0 features, now it's time to take a peek at the brand new Sunstone. OpenNebula 4.0 Sunstone has some beautiful new looks, but it's not only about external appearance: there has also been a major boost to the user experience by redefining the user workflow.

In this post we will show a few snapshots of some new Sunstone key features. The new wizard screen eases the task of creating and updating Virtual Machines, and there is also new functionality to update existing resources very easily using the extended info panel.

  • Easily edit your existing resources using the extended info panel.

  • When creating a new Virtual Machine template, you will be able to filter and select your images with a single click.

  • Select where you want your Virtual Machine to run.

  • Automatically add contextualization metadata to your Virtual Machines.

OpenNebula 4.0 will be available for testing really soon. Until then, we will keep you updated with the new features in posts like this one. You can also check the posts released last week about the Ceph integration and the new scheduling feature.

Stay tuned!

We are working hard on new features and overall improvements for OpenNebula 4.0. Before the new Sunstone eye-candy is revealed and steals all the attention, we wanted to share some of the other features: those small pieces that won't wow anyone but, combined, make OpenNebula the best option when it comes to integration capabilities.

In this post I will focus on the Scheduler related features.

Schedule any action, not only deployments

We have added a '--schedule' option to most of the VM life-cycle commands. Users can schedule one or more VM actions to be executed at a certain date and time, for example:

$ onevm shutdown 0 --schedule "05/25 17:45"
VM 0: shutdown scheduled at 2013-05-25 17:45:00 +0200

$ onevm cancel 0 --schedule "05/25 18:00"
VM 0: cancel scheduled at 2013-05-25 18:00:00 +0200

$ onevm show 0
SCHEDULED ACTIONS
ID ACTION        SCHEDULED         DONE MESSAGE
 0 shutdown    05/25 17:45            -
 1 cancel      05/25 18:00            -

These actions can be edited or deleted by updating the VM template (another new feature that you will see in 4.0):

$ onevm update 0

SCHED_ACTION=[
  ACTION="shutdown",
  ID="0",
  TIME="1369496700" ]
SCHED_ACTION=[
  ACTION="cancel",
  ID="1",
  TIME="1369497600" ]

Placement expressions improvements

We have also improved the requirements and rank expressions to make them more useful. Until now you could choose to deploy VMs in a certain cluster using the following expression:

SCHED_REQUIREMENTS = "CLUSTER = production"

But now you can also use variables from the Host’s Cluster template. Let’s say you have the following scenario:

$ onehost list
  ID NAME            CLUSTER   RVM      ALLOCATED_CPU      ALLOCATED_MEM STAT
   1 host01          cluster_a   0       0 / 200 (0%)     0K / 3.6G (0%) on
   2 host02          cluster_a   0       0 / 200 (0%)     0K / 3.6G (0%) on
   3 host03          cluster_b   0       0 / 200 (0%)     0K / 3.6G (0%) on    

$ onecluster show cluster_a
CLUSTER TEMPLATE
QOS="GOLD"

$ onecluster show cluster_b
CLUSTER TEMPLATE
QOS="SILVER"

You can now use these expressions:

SCHED_REQUIREMENTS = "QOS = GOLD"

SCHED_REQUIREMENTS = "QOS != GOLD & HYPERVISOR = kvm"

If you use the Ruby, Java or XML-RPC APIs you will be familiar with the XML representation of the resources. The variable names used in rank and requirements expressions can be child elements of HOST, HOST/HOST_SHARE or HOST/TEMPLATE. From now on, you can also use XPath to have more control over the element used:

SCHED_RANK="/HOST/HOST_SHARE/FREE_CPU"
SCHED_REQUIREMENTS="/HOST/TEMPLATE/CUSTOM_VAR != 5"

Scheduler decisions explained

By default, regular users can't see the Hosts or open the Scheduler log file, so when their VMs can't be deployed it may look like something is broken. In the next version, the scheduler will add a message to the VM template to let users know if there are any problems:

$ onevm show 4
USER TEMPLATE
SCHED_MESSAGE="Wed Feb 20 11:43:55 2013 : No hosts enabled to run VMs"

$ onevm show 5
USER TEMPLATE
SCHED_MESSAGE="Wed Feb 20 11:52:11 2013 : No host meets the SCHED_REQUIREMENTS expression"
SCHED_REQUIREMENTS="FREE_CPU > 100 & FREE_CPU < 50"

Basic affinity

There is a new special variable that provides basic VM affinity functionality. Let’s see a sample use case: we want to create two identical VMs, and they should run on the same Host. In OpenNebula 4.0 you can create VMs directly on hold, so the scheduler won’t deploy them until you decide to release them:

$ onetemplate instantiate 0 --multiple 2 --hold
VM ID: 2
VM ID: 3

Now that we have the IDs, we can update one of the VM’s scheduler requirements.

$ onevm update 3
SCHED_REQUIREMENTS="CURRENT_VMS = 2"

$ onevm release 2,3

The scheduler will deploy VM 2 first, and in the following cycle it will deploy VM 3 on the same Host. This, of course, has some limitations, because the scheduler treats each VM individually instead of as a group. If you need more advanced VM group management, give the AppFlow component a try.

As always, we welcome any feedback. In fact, most of these changes are requests made by the community. I invite you to join our mailing lists and our development portal if you haven't yet.

OpenNebula 4.0 is around the corner and we wanted to give you a sneak peek at one of the upcoming new features: Ceph integration.

The new Ceph datastore driver will initially be available for libvirt/KVM; support for Xen will come in subsequent versions. Users will be able to use it really easily, following the standard OpenNebula workflow:

  1. Create a new image in the Ceph datastore,
  2. create a template using that image,
  3. and run the virtual machine!

This driver uses libvirt to handle RBD devices (Ceph block devices), greatly simplifying the Datastore and Transfer Manager drivers.
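As a rough illustration of that workflow (the datastore and image names are just assumptions, and the Ceph datastore must have been registered by the administrator beforehand; the image URL is the one used earlier in these posts):

$ oneimage create -d ceph_ds --name ceph_ttylinux \
--path http://marketplace.c12g.com/appliance/4fc76a938fb81d3517000003/download
$ onetemplate create --name ceph_vm --cpu 1 --memory 1g --disk ceph_ttylinux
$ onetemplate instantiate ceph_vm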

We would like to thank Grzegorz Kocur, Bill Campbell and Vladislav Gorbunov for their community contributions to develop this driver.

More about this in the OpenNebula Ceph Datastore documentation.

This new feature is ready to be tested in the ‘master’ branch.

Give it a try if you want!

This year OpenNebula is actively participating in the world's leading high-tech event. C12G Labs will have an exhibition stand in the Open Source Park (Hall 6, Stand F16, 330), which we will be sharing with Netways, an IT solution provider quite active in enterprise-grade open-source tools.

We invite you to visit us and see the innovations that will come with OpenNebula 4.0, and catch a glimpse of the revamped Sunstone web interface. A lot of effort is being put into a complete facelift of the interface, which will enrich the experience of managing and using an OpenNebula cloud. Come and check it out for yourselves!

We will be happy to meet up in Hanover and discuss your infrastructure needs. If you are willing to enter the cloud path, we can pave the way for you and ensure an easy and smooth transition from your traditional data center into a cloudy one. Let us know if you want to arrange a meeting, or just drop by the exhibition stand.

See you all at CeBIT 2013!

I am an engineer working for the European project BonFIRE, whose goal is to:

Give researchers access to large-scale virtualised compute, storage and networking resources with the necessary control and monitoring services for detailed experimentation of their systems and applications.

In more technical words, it provides access to a set of testbeds behind a common API, as well as a set of tools to monitor the cloud infrastructure at both VM and hypervisor levels, enabling experimenters to diagnose cross-experiment effects on the infrastructure.

Some of these testbeds are running OpenNebula (3.6 with some patches for BonFIRE). As each testbed has its own “administration domain”, the setup differs between sites. The way we use OpenNebula is:

  • we have some default images, used for almost all VMs
  • default images are not updated often
  • users can save their own images, but it does not happen often

The update of the BonFIRE software stack to the latest version was accompanied at our laboratory by an upgrade of our testbed. After some study and calculations, we bought the following hardware:

  • 1 server with 2+8 disks (RAID1 for system, RAID 5 on 8 SAS 10k 600G disks), 6 cores and 48GB of RAM, 2 cards of 4 Gbps ports
  • 4 servers with 2 drives, 2 * 6 cores (total of 24 threads), 64GB of RAM, 2 Gbps ports

We previously had 8 small worker nodes (4GB of RAM) that were configured to use LVM with a snapshot-based cache feature to improve OpenNebula performance.

With this new hardware, the snapshot feature can't be used anymore, as it has disastrous performance when you have more than a few snapshots on the same source LV.

On the other hand, everyone reading this blog probably knows that keeping dozens of VM volumes on shared (NFS) storage requires a really strong backend with really good performance to reach acceptable levels.

The idea of our setup is to make NFS and local storage work together, providing:

  • better management of image copies over the network (ssh has a huge performance impact)
  • good VM performance, as each image is copied from NFS to local storage before being used.

Just before explaining the whole setup, let's show some raw performance data (yes, I know, this is not a really relevant benchmark; it's just to give some idea of the hardware capacity, and the commands behind the numbers are sketched after the list).

  • The server has write performance (dd if=/dev/zero of=/dev/vg/somelv conv=fdatasync) > 700MB/s and read performance > 1GB/s
  • The workers have 2 disks added as 2 PVs, and the LVs are striped; performance is > 300MB/s for synchronous writes and ~400MB/s for reads.
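A sketch of the kind of dd runs behind these numbers (the read command and the sizes are my illustration, and the write test is destructive on the target LV):

$ dd if=/dev/zero of=/dev/vg/somelv bs=1M count=4096 conv=fdatasync   # raw write
$ dd if=/dev/vg/somelv of=/dev/null bs=1M count=4096                  # raw read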

As we use virtualisation a lot to host services, the ONE frontend is itself installed in a virtual machine. Our previous install was on CentOS 5, but to get a more recent kernel and virtio drivers I installed it on CentOS 6.

To reduce network connections and improve disk performance, this VM is hosted on the disk server (it is almost the only VM there).

The default setup (ssh + lvm) didn't perform well, mostly due to the cost of ssh encryption. I then switched to netcat, which was much better (almost at the maximum Gb link speed) but has at least two drawbacks:

  • It doesn't manage caching efficiently (no cache on the client, only FS cache on the server, so almost no cache benefit between two copies)
  • It needs a netcat listener set up on the worker for each copy

So I finally set up NFS. To avoid extra I/O between the server/network and the VM, I put the NFS server on the hypervisor itself and mounted it on the OpenNebula frontend. The advantage of NFS is that it handles caching on both client and server pretty well (for static content at least). That way, we have a good solution (sketched after the list below) to:

  • Copy images when they are created on the frontend (they are just copied onto the NFS share mounted (synchronously) from the hypervisor)
  • Copy images from NFS to the workers (a dd from the NFS mount to a local LV), which is fast and may benefit from the client cache when the same image is copied many times (remember, we mostly use the same set of 5/6 source images)
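A rough sketch of this wiring, assuming the default OpenNebula datastore location (hostnames, network and paths are illustrative):

# on the storage server / hypervisor: export the datastores directory (/etc/exports)
/var/lib/one/datastores  192.168.0.0/24(rw,sync,no_root_squash)

# on the frontend and on each worker: mount it
$ mount -t nfs storage-server:/var/lib/one/datastores /var/lib/one/datastores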

So, assuming we now have a good copy solution, one remaining bottleneck is the network. To avoid, or at least limit it, I aggregated Gb links between the NFS server and our switch (and between each worker node and the switch) to get 4Gbps of capacity between the NFS server and the switch. Moreover, as the load-balancing algorithm decides which link is used for a given transaction based on the addresses involved (a single transaction can't go through more than one link), I computed the IP addresses of each worker node so that each one uses a given, unique link. This does not mean that link won't be used for other things or for other workers, but it ensures that when you copy to the 4 nodes at the same time the network is used optimally, with one 1Gb link per transaction.
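The post does not detail the exact aggregation setup, but one common way to get this behaviour on CentOS 6 is Linux bonding with a transmit hash based on IP addresses, so a given source/destination pair always maps to the same physical link (a sketch only; the device name, bonding mode and addresses are assumptions, and the switch must be configured to match):

# /etc/sysconfig/network-scripts/ifcfg-bond0 on the NFS server
DEVICE=bond0
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer2+3"
IPADDR=192.168.0.10
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes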

Also, in order to reduce useless network transactions, I updated the TM drivers (the idea is sketched after the list) to:

  • handle the copy from NFS to LV
  • avoid ssh/scp when possible and just do a cp on the NFS share (for example when saving a VM, generating context, etc.)
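A very rough sketch of the idea behind the modified copy operation (this is not the actual BonFIRE driver; argument handling, paths and the LV are illustrative):

#!/bin/bash
# clone-like TM operation: the image lives on the NFS-mounted datastore,
# which is also visible on the worker, so the dd runs there and ssh only
# carries the command, not the image data.
SRC_PATH=$1   # image path on the NFS-mounted datastore
DST_HOST=$2   # worker node
DST_LV=$3     # pre-created logical volume on the worker
ssh "$DST_HOST" "dd if=$SRC_PATH of=$DST_LV bs=1M"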

We first wanted to mount the NFS share read-only on the workers, but that requires an scp when saving a VM, which isn't optimal. So a few things are still written from the workers to NFS:

  • the deployment file (as it is sent over ssh with a 'cat > file')
  • saved images

This way, we:

  • have an efficient copy of images to the workers (no ssh tunneling)
  • may see significant improvements thanks to the NFS cache
  • don't suffer from concurrent write access to NFS, because VMs are booted from a local copy

Some quick benchmarks to finish this post:

  • From first submission (no cache, NFS share just mounted) to 100 VMs running (ssh-able): < 4 min
  • From first submission (no cache, NFS share just mounted) to 200 VMs running (ssh-able): > 8 min

In the simultaneous deployment of a large number of VMs, OpenNebula reveals some bottlenecks, as the monitoring of already-running VMs slows down the deployment of new ones. In more detail, when deploying a large number of VMs, it might happen that the monitoring threads interfere with the deployment of new VMs. This is because OpenNebula enforces a single VMM task per host at a time, since in general hypervisors don't support multiple concurrent operations robustly.

In our particular 200-VM deployment we noticed this effect, where the deployment of new VMs was slowed down by the monitoring of already running VMs. We did the test with the default values for the monitoring interval, but, to mitigate this issue, OpenNebula offers the possibility to adjust the monitoring and scheduling times, as well as to tune the number of VMs sent per cycle.
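The relevant knobs live in oned.conf and sched.conf; for example (the values below are purely illustrative, not recommendations):

# /etc/one/oned.conf
MONITORING_INTERVAL = 60   # seconds between monitoring cycles

# /etc/one/sched.conf
SCHED_INTERVAL = 30        # seconds between scheduling cycles
MAX_DISPATCH   = 30        # VMs dispatched per scheduling cycle
MAX_HOST       = 1         # VMs dispatched to a single host per cycle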

Additionally, there is a current effort to optimize the monitoring strategy to remove this effect: (i) move the VM monitoring to the information system, thus preventing the overlap with VM control operations, and (ii) obtain the information of all running VMs in a single operation, as currently implemented when using Ganglia (see tracker issue 1739 for more details).

After receiving several requests from our users to sponsor particular features, the OpenNebula project is happy to announce the creation of a software development program where organizations will be able to fund the development of new features. When we define the roadmap for a new OpenNebula release we listen to all users, trying to prioritise the features demanded by the organisations supporting the open-source project with a commercial subscription; however, we cannot guarantee a time frame for their development. The Fund a Feature Program can be used to implement, within a given time frame, new functionality or enhancements in the code, new or enhanced drivers, or new integrations with existing management, billing and other OAM&P systems.

The development of new features occurs in the public repository of OpenNebula, and the new code undergoes OpenNebula's testing, continuous integration, and QA processes before its incorporation into the main OpenNebula distribution. The new code and documentation will publicly acknowledge your funding support, and the OpenNebula web site will include your name on the list of featured contributors.

Funding a feature not only gets you the feature you need faster, but also allows you to contribute to the open source project from which you derive so much value.

Thanks a lot for your support!

Over the last five years, since the release of the first open-source version of OpenNebula in March 2008, we have been involved in many presentations, discussions and meetings where people wanted to know how OpenNebula compares with the rest of open-source Cloud Management Platforms (CMPs), mostly with Eucalyptus and OpenStack. The most common understanding is that all CMPs are competing in the same market, trying to fill the same gap. Consequently, people jump to the wrong conclusion that after years of a fierce competition, there will only be one winner, a single open-source CMP in the market. However, as discussed by Joe Brockmeier in his post “It’s Not Highlander, There Can Be More Than One Open Source Cloud”, there is room in the market for several open-source CMPs that, addressing different cloud niches, will fit together into a broad open cloud ecosystem.

We have prepared this article to briefly describe our experience with the different types of cloud models, and our view of how the main open-source CMPs are targeting their needs. Do not expect a table comparing side-by-side the size of the communities, the technical features of the different tools, or the management structure of the projects. We have tried to focus only on their general approaches and their overall position in the cloud market, and, of course, we have tried to be as neutral as possible.

Two Different Cloud Models

Although there are as many ways to understand cloud computing as there are organizations planning to build a cloud, they mostly fall between two extreme cloud models:

  • Datacenter Virtualization: On one side, there are businesses that understand cloud as an extension of virtualization in the datacenter; hence looking for a vCloud-like infrastructure automation tool to orchestrate and simplify the management of the virtualized resources.
  • Infrastructure Provision: On the other side, there are businesses that understand cloud as an AWS-like cloud on-premise; hence looking for a provisioning tool to supply virtualized resources on-demand.

Yes, we know, we said that we wanted to focus only on open-source CMPs. However we have intentionally used two of the principal cloud “products”, VMware vCloud and AWS, a proprietary CMP and a cloud service, because they are the most well known implementations of both models. We will go even a step ahead and claim that most of the users in the first cloud model explicitly express their willingness to find an open alternative to vCloud because it is too expensive, because they want to avoid vendor lock-in, or because it cannot be adapted to meet their needs. Equally, users in the second cloud model explicitly mention Amazon as the type of cloud they want to build internally.

The following list describes the main characteristics of both types of clouds. It is not exhaustive; it is just here to illustrate some of the differences between the two philosophies.

Datacenter Virtualization vs. Infrastructure Provision:

  • Applications: multi-tiered applications defined in a traditional, “enterprise” way vs. “re-architected” applications built to fit into the cloud paradigm
  • Interfaces: feature-rich API and administration portal vs. simple cloud APIs and self-service portal
  • Management Capabilities: complete life-cycle management of virtual and physical resources vs. simplified life-cycle management of virtual resources with abstraction of the underlying infrastructure
  • Cloud Deployment: mostly private vs. mostly public
  • Internal Design: bottom-up design dictated by the management of datacenter complexity vs. top-down design dictated by the efficient implementation of cloud interfaces
  • Enterprise Capabilities: high availability, fault tolerance, replication, scheduling… provided by the cloud management platform vs. mostly built into the application, as in “design for failure”
  • Datacenter Integration: easy to adapt to fit into any existing infrastructure environment and leverage IT investments vs. built on new, homogeneous commodity infrastructure

This classification of existing cloud models is not new. The “Datacenter Virtualization” and “Infrastructure Provider” cloud models have received different names by different authors: “Enterprise Cloud” and “Next Generation Cloud” by many analysts, “Cloud-in” and “Cloud-out” by Lydia Leong, “Enterprise Cloud” and “Open Cloud” by Randy Bias, “Enterprise Cloud” and “Private Cloud” by Simon Wardley, “Private Cloud” and “Public Cloud” by Matt Asay, and “Policy-based Clouds” and “Design for fail Clouds” by Massimo Re Ferrè, who also categorized these models as “Design Infrastructure to support Applications” versus “Design Applications that leverage Infrastructures”.

Two Different Flavors of Cloud Management Platforms

Existing open-source CMPs can be placed somewhere in between both models. We have created a chart, the CMP Quadrant, aiming to help corporations better understand the present and future landscape of the cloud market. One of the dimensions is the “Cloud Model”, and the second one represents “Flexibility” in terms of the capabilities of the product to adapt to datacenter services and to be customized to provide a differentiated cloud service. This dimension captures the degree of adaptability of the product, and goes from low to high. Finally, we have placed in the chart the main open-source players in the cloud ecosystem: Eucalyptus, CloudStack, OpenStack and OpenNebula… or at least those tools that are commonly compared to OpenNebula by our users and customers.

Some important clarifications:

  • We are not suggesting that one position (read “tool”) in the chart is better than another, only that some of the CMPs are so different that they cannot be compared; they are on completely different tracks (read “zones in the Quadrant”).
  • The chart does not represent absolute values; the relevant information is in the relative positions of the CMPs with respect to their “Cloud Model” and “Flexibility”.
  • The openness of the software is orthogonal to this chart; you can also use it to compare proprietary CMPs.
  • Any CMP can be used to build either public or private clouds; all of the CMPs in the Quadrant implement cloud APIs.
  • And last, but not least, this map is not static: the different CMPs will move right, left, up or down over time, but they cannot be simultaneously in different places. There is no single perfect solution for every possible scenario.

Comparing vCloud to AWS or comparing vCloud to OpenStack is like comparing apples to oranges, as has been clearly expressed by Massimo Re Ferrè and Boris Renski respectively. Both are fruit, but with very different flavors. That being said, it is clear that since all the tools enable infrastructure cloud computing, there is always some overlap in the features that they provide. This overlap tends to be larger for those tools that are closer on the “Cloud Model” axis.

There are fundamental differences in philosophy and target market between OpenNebula and Eucalyptus. They are in opposite zones of the Quadrant, servicing different needs and implementing completely different philosophies; I would say that they represent the open-source incarnations closest to vCloud and AWS respectively. In the same way, many companies compare OpenNebula with OpenStack because both represent flexible solutions that can be adapted to their needs, but wrongly think that both enable the same type of cloud. It is also clear that Eucalyptus and OpenStack meet the same need and so compete for the same type of cloud.

Looking to the Future

In OpenNebula, we do not think that one cloud model will dominate over the other. They may converge in the very long term, but not within the next 10 years. Consequently, and because a single CMP cannot be all things to all people, we will see an open-source cloud space with several offerings focused on different environments and/or industries. This will be the natural evolution; the same has happened in other markets. The four open-source CMPs will coexist and, in some cases, work together in a broad open cloud ecosystem.

To a certain extent, this collaboration has already started. Some of our users have reported experiences using OpenNebula with other cloud platforms:

  • Some corporations are mixing an Enterprise Cloud with an in-house Cloud Service. They are implementing a cloudbursting architecture where an OpenNebula enterprise cloud bursts to an OpenStack- or Eucalyptus-based cloud when the demand for computing capacity spikes.
  • Other corporations are using components from different projects to build their cloud. The integration capabilities of OpenNebula are allowing its integration with OpenStack Swift or OpenStack Quantum for object/block store and networking management respectively in the data center.

We are sure that in the short term we will see some of the open-source CMPs working together, while at the same time finding ways to differentiate themselves in their own cloud markets.

OpenNebula 4.0 is in the works: the team is finishing the shiny new features and the beta release is just a few weeks away. Here's our monthly newsletter with the main news from the last month, including what you can expect in the coming months.

Technology

A very active month for OpenNebula, with several big news items.

A major change has occurred in the OpenNebula release process, with C12G announcing that every OpenNebula maintenance release and service pack will be made publicly available at the community site. From release 3.8 onwards, the OpenNebula community will enjoy the benefits of the OpenNebulaPro distribution, including C12G's Quality Assurance processes, which is really good news! The OpenNebula distribution will benefit from more up-to-date, higher-quality software.

The team is focused on the upcoming 4.0 release. A significant number of bugs are being ironed out, and several relevant features are being worked on. You can follow the progress on the development portal.

In a nutshell, the upcoming OpenNebula 4.0 will come with a revamped Sunstone interface (stay tuned for this! screenshots may leak shortly in your favorite cloud management platform's twitter account and in this blog), core enhancements such as audit trails, additions to the virtual machine life-cycle (like the “hold” state), the ability to create disk snapshots, which comes in very handy for day-to-day service management, as well as support for RBD block devices.

A groundbreaking milestone has been reached with the open-source release of OpenNebulaApps, a suite of tools for users and administrators of OpenNebula clouds to simplify and optimize multi-tiered application management. The new software has been released under the Apache license and will be incorporated into the main distribution of OpenNebula, bringing state-of-the-art service management (among other nice features) to the OpenNebula community.

It is also worth highlighting the appearance of the OpenNebula cloud OS architecture in IEEE Computer, with a description of its different components and a discussion of the different approaches to cloud federation.

Community

There have been a number of community contributions to OpenNebula during this last month. A very valuable contribution was made by Nicolas Agius as a new ecosystem component: the Clustered Xen Manager (CXM) drivers for OpenNebula. These TM and VMM drivers allow the use of cLVM datastores on a pool of Xen hypervisors, and also bring high availability and load balancing to the hosted VMs using CXM.

Another outstanding contribution by Ricardo Duarte is the econe metadata server, which enables VM contextualization for OpenNebula clouds the same way it is done in the Amazon EC2 environment.

Moreover, support for OpenNebula has been added to rexify (a popular server configuration management tool), enabling virtual machine deployment and contextualization with rexify in OpenNebula clouds.

We would like to thank the numerous people that provided feedback, either through the development portal or the user mailing list, with bug reports, patches, and intense testing. The OpenNebula community is as healthy as ever!

Outreach

A relevant post has been written in GigaOM about the open-source release of the OpenNebula Apps components under the Apache license.

During the following months, members of the OpenNebula team or people deeply familiar with the technology will be speaking at the following events:

  • FOSDEM 2013, Brussels, Belgium, February 2 and 3, 2013
  • CeBIT, Hanover, Germany, March 5-9, 2013
  • FlossUK, Managing Enterprise Clouds with OpenNebula, March 20 and 21, 2013

If you will be attending any of these events and want to meet with a member of the OpenNebula team, drop us a line at contact@opennebula.org. Remember that you can see slides and resources from past events in our Events page. We have also created a Slideshare account where you can see the slides from some of our recent presentations.