OpenNebula is being used by many leading supercomputing centers (SARA, CESGA, CESCA, PDC-KTH, PIC…) and research centers (ESA, CERN, FermiLab, CSIRO, KIT, Harvard SEAS…) to build HPC and science clouds that host virtualized computational environments, such as batch farms and computing clusters, and that provide users with new “HPC as a service” resource provisioning models. One of our recent invited talks in this field, at ISC Cloud Computing 2011, illustrates the benefits of using OpenNebula both as an infrastructure tool, to build private clouds, and as a provisioning tool, to build public clouds.
Here we summarize the main requirements we have received from the organizations building clouds for HPC environments, and the functionality that makes OpenNebula uniquely suited to fulfill them:
- Support for Automatic and Elastic Management of the Computing Service. The powerful CLI and APIs exposed by OpenNebula enable easy integration with the most common job management systems (Torque, Open Grid Engine, Platform LSF…) to automatically provision computing worker nodes to meet dynamic demand. The advanced contextualization mechanisms enable automatic configuration of the worker nodes.
- Combination of Physical and Virtual Resources. You can use OpenNebula to manage the virtual worker nodes in your compute cluster while keeping physical resources for performance-sensitive HPC applications that require high-bandwidth, low-latency interconnection networks.
- Management of Several Physical Clusters with Different Configurations. The multiple-zone functionality enables the management of multiple physical clusters, each with a specific architecture and software/hardware execution environment, to meet the needs of different workload profiles.
- Support for Several VOs. The new functionality for on-demand provisioning of Virtual Data Centers can be used to provide different VOs with isolated compartments of the cloud infrastructure.
- Support for Heterogeneous Execution Environments. The new repositories for VM appliances and templates can be used to provide users with pre-defined application environments. The fine-grained access control supports the creation and easy maintenance of appliance repositories, which can even be kept private to individual Virtual Data Centers (VOs).
- Full Isolation of Execution for Performance-sensitive Applications. The functionality for automatic placement of VMs and the configurable monitoring system make it possible to define isolation levels for the computing services. The new multiple-zone support extends this functionality to easily manage fully isolated physical clusters.
- Execution of Complete Computing Clusters. You can deploy multi-tier services consisting of groups of interconnected VMs and define their auto-configuration at boot time.
- Cloudbursting to Meet Peak Demands. The hybrid cloud functionality enables the deployment of cloudbursting architectures to address peak or fluctuating demands of HTC (High Throughput Computing) workloads.
- Management of Persistent Scientific Data. You can make disks persistent, save their changes for subsequent executions, and share the resulting disks with other users in your Virtual Data Center (VO).
- Placement of VMs Near the Input Data. OpenNebula’s scheduler provides automatic VM placement and supports workload- and resource-aware allocation policies such as packing, striping, load-aware, or affinity-aware placement.
- Ensure that Each Tenant Gets a Fair Share of Resources. OpenNebula’s resource quota management helps allocate, track, and limit resource utilization.
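For the virtual-cluster and auto-configuration items, each tier is typically described by a VM template whose CONTEXT section carries the boot-time configuration into the guest. A sketch of a worker-node template follows; the image and network names, the context variables, and the script path are example values for illustration, not fixed OpenNebula names:

```
# Illustrative worker-node template for a virtual computing cluster.
NAME   = worker
CPU    = 1
MEMORY = 1024
DISK   = [ IMAGE = "batch-worker-appliance" ]  # hypothetical appliance name
NIC    = [ NETWORK = "cluster-net" ]           # hypothetical virtual network

# Contextualization: these variables are made available inside the VM at
# boot, so the node can configure itself and join the batch farm.
CONTEXT = [
  HOSTNAME  = "worker-$VMID",            # $VMID is substituted per instance
  MASTER_IP = "10.0.0.1",                # head-node address, example value
  FILES     = "/srv/context/init.sh"     # example script run at boot
]
```

Instantiating this template several times yields a group of interconnected, self-configuring worker nodes behind a head-node VM defined by a similar template.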
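The persistent-data item can be realized with a persistent datablock image in the image repository. Roughly, and with all names and sizes as example values:

```
# Illustrative persistent datablock for scientific data. With a persistent
# image, changes written by a VM survive its shutdown and are visible to the
# next VM that attaches the image.
NAME       = "experiment-dataset"   # hypothetical image name
TYPE       = DATABLOCK
PERSISTENT = YES
SIZE       = 10240                  # MB
FSTYPE     = ext3
```

Once registered with `oneimage create`, the image can be attached from a VM template and, with the appropriate group ownership or access-control rules, shared with other users in the same Virtual Data Center.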
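The data-aware placement item maps onto the REQUIREMENTS and RANK expressions interpreted by OpenNebula’s match-making scheduler: REQUIREMENTS filters the candidate hosts and RANK orders them. A sketch, in which the cluster name is an example value:

```
# Illustrative placement hints in a VM template.
REQUIREMENTS = "CLUSTER = \"storage-pod\""   # run only near the input data
RANK         = "- RUNNING_VMS"               # striping: prefer emptier hosts
```

Swapping the RANK expression changes the policy: `RUNNING_VMS` packs VMs onto fewer hosts, while an expression based on free CPU gives a load-aware policy.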
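The elastic-management item above reduces, in practice, to a small piece of glue logic between the job manager and the OpenNebula CLI. A minimal sketch of the decision step follows; the inputs, thresholds, and `jobs_per_worker` parameter are all hypothetical, and in a real deployment the queue state would come from the job manager while the decision would be acted on with commands such as `onetemplate instantiate` or `onevm shutdown`:

```python
def scaling_decision(pending_jobs, idle_workers, running_workers, max_workers,
                     jobs_per_worker=4):
    """Illustrative elasticity policy for a virtualized batch farm.

    Returns a positive number of worker VMs to boot, a negative number
    to retire, or 0 to leave the cluster as it is. All parameters are
    example knobs, not OpenNebula settings.
    """
    if pending_jobs > 0:
        # Boot enough workers to drain the queue (ceiling division),
        # capped by the capacity the cloud operator has granted us.
        needed = -(-pending_jobs // jobs_per_worker)
        return min(needed, max_workers - running_workers)
    if idle_workers > 0:
        # No queued work: retire idle worker nodes to free resources.
        return -idle_workers
    return 0
```

A cron job or daemon would call this periodically and translate a positive result into `onetemplate instantiate` calls and a negative one into `onevm shutdown` calls for the idle nodes.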
OpenNebula 3.0 brings many other new features for building cutting-edge cloud infrastructures. OpenNebula 3.0 is a fully open-source technology: you have the software, the guides, and our support to deploy your cloud infrastructure for High Performance Computing.