One of OpenNebula's main design principles is to enable large-scale deployments. Deployments of this type typically involve a large number of physical hosts intended to run an even larger number of virtual machines. This matters because many OpenNebula users operate large-scale clouds with tens of thousands of virtual machines.
The scalability of the virtual infrastructure manager is, without a doubt, a keystone when a large-scale cloud deployment is at stake. The ability to handle a large number of resources, keep track of them, and stay responsive is essential. For that very reason, the OpenNebula project has put a lot of effort into making the central component of OpenNebula, the core daemon, as stable and robust as possible. This was largely possible thanks to the invaluable feedback provided by the community, so kudos to you! But aside from raw scalability, there are many other aspects in which OpenNebula can help, with features specifically thought out to handle a large number of resources:
- Clusters. These logical entities can be defined as pools of physical hosts that share datastores and virtual networks. Clusters are used for load balancing, high availability, and high performance computing. The idea is to group a set of physical hosts that are homogeneous enough to pull images from the same server (i.e., they share a datastore) and to use the same virtual networks; that is, they have the same physical network configuration, whether because they share the same bridging configuration or because they have access to the same Open vSwitch, for instance. Its benefits with respect to large-scale deployments include the ability to deliver a particular virtual machine to the right hardware, which can get tricky as the number of physical resources increases (e.g., “I want this VM to run on a host with the best network connection available”), and the possibility to load balance I/O operations across several datastores.
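As a rough sketch of how this grouping works in practice, a cluster can be created and populated from the command line with the `onecluster` tool, and hosts, datastores and virtual networks added to it. The resource names below (`production`, `host01`, `shared_ds`, `priv_net`) are of course just placeholders for illustration:

```
# Create a cluster and group homogeneous resources into it
$ onecluster create production
$ onecluster addhost production host01
$ onecluster adddatastore production shared_ds
$ onecluster addvnet production priv_net

# Verify the grouping
$ onecluster show production
```

Once a VM is assigned to a cluster, the scheduler only considers hosts that can reach that cluster's datastores and virtual networks, which is exactly the "right hardware" guarantee described above.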
- Virtual Data Centers. Fully-isolated virtual infrastructure environments where a group of users, under the control of the Virtual Data Center (VDC) administrator, can create and manage compute, storage and networking capacity. The VDC administrator can create new users inside the VDC. Both admins and users access the VDC through a reverse proxy, so they don't need to know the endpoint of the OpenNebula cloud, only the address of the oZones server and the VDC they belong to. This feature can be used in large-scale deployments to achieve multi-tenancy, effectively partitioning a large cloud into smaller parts ready to be delivered to different groups or organizations.
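To give an idea of what carving out a VDC looks like, a template can be fed to the `onevdc` command of the oZones toolset. The attribute names below follow the oZones guide of this OpenNebula generation, but treat them (and all the values) as illustrative assumptions rather than a definitive reference:

```
# myvdc.template -- a VDC carved out of zone 1, owning hosts 2 and 3
NAME         = myvdc
VDCADMINNAME = vdcadmin
VDCADMINPASS = password
ZONEID       = 1
HOSTS        = "2,3"
```

```
$ onevdc create myvdc.template
```

The listed hosts are then reserved for that VDC's users, giving each tenant its own slice of the physical infrastructure.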
- Hybrid Cloud. This extension of a private cloud allows the combination of local resources with resources from remote cloud providers, handled transparently by OpenNebula. The remote provider could be a commercial cloud service, such as Amazon EC2, or a partner infrastructure running a different OpenNebula instance. This support for cloudbursting enables highly scalable hosting environments, since peak demands that cannot be satisfied by local resources are outsourced to external providers.
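The transparency comes from the VM template itself: the same template can carry both a local definition and a provider-specific section that is used only when the VM bursts out. A minimal sketch for the EC2 case follows; the image name, AMI id and keypair are placeholders:

```
# Hybrid VM template: local attributes plus an EC2 section
NAME   = webserver
CPU    = 1
MEMORY = 512

# Used when the VM is scheduled on a local host
DISK = [ IMAGE = "webserver-img" ]

# Used when the VM is deployed through the EC2 driver
EC2 = [ AMI          = "ami-00000000",
        INSTANCETYPE = "m1.small",
        KEYPAIR      = "my-keypair" ]
```

With this in place, whether the VM lands on a local hypervisor or on EC2 is purely a scheduling decision; users submit the same template either way.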
- OpenNebula Zones. A Zone can be seen essentially as an OpenNebula instance, that is, a group of interconnected physical hosts with hypervisors controlled by OpenNebula. A Zone can be added to the oZones server, which provides a centralized way to manage multiple OpenNebula deployments. In this way, the oZones server presents a list of aggregated resources, allowing for a loose federation of several Clouds, adding an order of magnitude in the scale of Cloud infrastructures that can be managed with OpenNebula technology.
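Registering an existing OpenNebula instance with the oZones server is again template-driven, via the `onezone` command. The attribute names below are an assumption based on the oZones documentation of this era, and the endpoint URL is a placeholder:

```
# zone1.template -- register an OpenNebula instance as a Zone
NAME     = zone1
ONENAME  = oneadmin
ONEPASS  = opennebula
ENDPOINT = "http://zone1.example.com:2633/RPC2"
```

```
$ onezone create zone1.template
$ onezone list
```

After registration, `onezone list` shows the aggregated view across all Zones, which is what makes the loose federation described above manageable from a single point.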