Planning the installation 1.4

Overview

OpenNebula assumes that your physical infrastructure adopts a classical cluster-like architecture with a front-end, and a set of cluster nodes where Virtual Machines will be executed. There is at least one physical network joining all the cluster nodes with the front-end.

[Figure: high-level architecture of the cluster, its components and their relationships]

The basic components of an OpenNebula system are:

  • Front-end, executes the OpenNebula and cluster services.
  • Nodes, hypervisor-enabled hosts that provide the resources needed by the Virtual Machines.
  • Image repository, any storage medium that holds the base images of the VMs.
  • OpenNebula daemon, the core service of the system. It manages the life-cycle of the VMs and orchestrates the cluster subsystems (network, storage and hypervisors).
  • Drivers, programs used by the core to interface with a specific cluster subsystem, e.g. a given hypervisor or storage file system.
  • oneadmin, the administrator of the private cloud, who performs any operation on the VMs, virtual networks, nodes or users.
  • Users, use the OpenNebula facilities to create and manage their own virtual machines and virtual networks.

System Requirements

Cluster Front-End

This section details the software that you need to install in the front-end to run and build OpenNebula. The front-end will access the image repository, which should be big enough to store the VM images for your private cloud. Usually these are master images that are cloned (copied) when you start the VM, so you need to plan your storage requirements depending on the number of VMs you'll run in your virtual infrastructure (see the section below for more information). The base installation of OpenNebula only takes 10MB.

Installation Mode

OpenNebula can be installed in two modes:

  • system-wide: binaries, log files and configuration files will be placed in standard UNIX locations under the root file system. You will need root access to perform this installation.
  • self-contained: the OpenNebula distribution is placed in a self-contained location. This is the preferred installation mode.

In either case, you do not need the root account to run the OpenNebula services.

Software Packages

This machine will act as the OpenNebula server and therefore needs to have the following software installed:

  • ruby >= 1.8.6 and < 1.9.0
  • sqlite3 >= 3.5.2
  • xmlrpc-c >= 1.06
  • openssl >= 0.9
  • ssh

Additionally, to build OpenNebula from source you need:

  • Development versions of the sqlite3, xmlrpc-c and openssl packages, if your distribution does not install them with the libraries.
  • scons >= 0.97
  • g++ >= 4
  • flex >= 2.5 (optional, only needed to rebuild the parsers)
  • bison >= 2.3 (optional, only needed to rebuild the parsers)
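As an illustration only, on a Debian/Ubuntu-style front-end the run-time and build requirements listed above could be pulled in roughly like this. The package names are distribution-specific and may differ on your system, in particular the name of the xmlrpc-c development package:

<xterm> # apt-get install ruby sqlite3 openssl ssh
# apt-get install g++ scons flex bison libsqlite3-dev libssl-dev libxmlrpc-c3-dev
</xterm>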

Optional Packages

:!: These packages are not needed to run or build OpenNebula. They improve the performance of the user-land libraries and tools of OpenNebula, not the core system. You will probably experience a more responsive CLI.
First install rubygems and ruby development libraries:

  • ruby-dev
  • rubygems
  • rake
  • make
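On a Debian/Ubuntu-like system, for example, these could be installed with something along these lines (the package names are taken from the list above, but they may differ on your distribution):

<xterm> # apt-get install ruby-dev rubygems rake make </xterm>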

Then install the following packages:

  • ruby xmlparser, some distributions include a binary package for this (libxml-parser-ruby1.8). If it is not available in your distribution, install the expat libraries with their development files and install xmlparser using gem:

<xterm> # gem install xmlparser --no-ri --no-rdoc </xterm>

:!: Note the extra parameters to gem install. Some versions of xmlparser have problems building the documentation, and it can be used without the documentation installed.
:!: Bear in mind that in some Linux flavors you will need to install the “expat-devel” package to be able to install xmlparser.

  • ruby nokogiri, to install this gem you will need the libxml2 and libxslt libraries and their development versions. Then we can install the nokogiri gem:

<xterm> # gem install nokogiri</xterm>

Cluster Node

The nodes will run the VMs and do not have any additional storage requirements; see the storage section for more details.

Software Packages

These are the requirements in the cluster nodes that will run the VMs:

  • ssh server running
  • a working, properly configured hypervisor
  • ruby >= 1.8.5
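A quick way to sanity-check a node is to run the checks by hand; this is only a sketch, and the last command assumes a KVM/libvirt node:

<xterm> $ ruby -v
$ pgrep -l sshd
$ virsh version
</xterm>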

Preparing the Cluster

This section guides you through steps needed to prepare your cluster to run a private cloud.

Storage

:!: In this guide we assume that the image repository and the OpenNebula var directory can be accessed from the cluster nodes through a Shared FS of any kind. In this way you can take full advantage of the hypervisor capabilities (i.e. live migration) and the OpenNebula storage module (i.e. no need to always clone images).

:!: OpenNebula can work without a Shared FS, but then you will always have to clone the images and you will only be able to do cold migrations. However, this non-shared configuration does not impose any significant storage requirements. If you want to use this configuration check the Storage Customization guide and skip this section.

The cluster front-end will export the image repository and the OpenNebula installation directory to the cluster nodes. The size of the image repository depends on the number (and size) of the images you want to store. Also, when you start a VM its image will usually be cloned (copied), so you must make sure that there is enough space to store all the running VM images.

Storage Model: NFS

Create the following hierarchy in the front-end root file system:

  • /srv/cloud/one, will hold the OpenNebula installation and the clones for the running VMs
  • /srv/cloud/images, will hold the master images and the repository
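A minimal sketch of the commands to create this hierarchy, run as root on the front-end:

<xterm> # mkdir -p /srv/cloud/one /srv/cloud/images </xterm>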

<xterm> $ tree /srv
/srv/
`-- cloud
    |-- one
    `-- images
</xterm>

Example: A 64-core cluster will typically run around 80 VMs, and each VM will require an average of 10GB of disk space. So you will need ~800GB for /srv/cloud/one; you will also want to store 10-15 master images, so ~200GB for /srv/cloud/images. A 1TB /srv/cloud will be enough for this example setup.

Export /srv/cloud to all the cluster nodes. For example, if you have all your physical nodes in a local network with address 192.168.0.0/24 you will need to add to your /etc/exports a line like this:

<xterm> $ cat /etc/exports
/srv/cloud 192.168.0.0/255.255.255.0(rw)
</xterm>
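After adding that line the export has to be (re)activated on the front-end, for example with exportfs (restarting the NFS server also works):

<xterm> # exportfs -ra </xterm>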

In each cluster node create /srv/cloud and mount this directory from the front-end.
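For instance, assuming the front-end is reachable from the nodes under the hypothetical hostname frontend, each node could mount the share like this (or through an equivalent /etc/fstab entry):

<xterm> # mkdir -p /srv/cloud
# mount -t nfs frontend:/srv/cloud /srv/cloud
</xterm>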

User Account

The Virtual Infrastructure is administered by the oneadmin account. This account will be used to run the OpenNebula services and to do regular administration and maintenance tasks.

:!: OpenNebula supports multiple users to create and manage virtual machines and networks. You will create these users later when configuring OpenNebula.

Follow these steps:

  • Create the cloud group, to which the OpenNebula administrator user will belong:<xterm>

# groupadd cloud </xterm>

  • Create the OpenNebula administrative account (oneadmin); we will use the OpenNebula installation directory as the home directory for this user:<xterm>

# useradd -d /srv/cloud/one -g cloud -m oneadmin </xterm>

  • Get the user and group id of the OpenNebula administrative account. These ids will be used later to create users in the cluster nodes with the same ids:<xterm>

$ id oneadmin
uid=1001(oneadmin) gid=1001(cloud) groups=1001(cloud) </xterm> In this case the user id will be 1001 and the group id also 1001.

  • Create the cloud group and the oneadmin account also on every node that will run VMs. Make sure that their ids are the same as on the front-end, in this example 1001:<xterm>

# groupadd --gid 1001 cloud
# useradd --uid 1001 -g cloud -d /srv/cloud/one oneadmin </xterm>

:!: You can use any other method to make a common cloud group and oneadmin account in the nodes, for example NIS.

Network

There are no special requirements for networking, apart from those derived from the previous configuration steps. However, to make effective use of your VM deployments you'll probably need to make one or more physical networks accessible to them.

This is achieved with Ethernet bridging; there are several guides that explain how to set up this configuration, see for example the networking howto in the Xen documentation.

For example, a typical cluster node with two physical networks, one for public IP addresses (attached to the eth0 NIC) and the other for private virtual LANs (eth1 NIC), should have two bridges: <xterm> $ brctl show
bridge name   bridge id           STP enabled   interfaces
vbr0          8000.001e682f02ac   no            eth0
vbr1          8000.001e682f02ad   no            eth1
</xterm>
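As a sketch, bridges like these could be created by hand with the bridge-utils tools; on a real node you would normally make them persistent in your distribution's network configuration instead:

<xterm> # brctl addbr vbr0
# brctl addif vbr0 eth0
# ifconfig vbr0 up
</xterm>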

For more details on using virtual networks please check the Virtual Network Usage Guide and the Networking Customization Guide.

Secure Shell Access

You need to create ssh keys for the oneadmin user and configure the machines so that it can connect to them over ssh without being asked for a password.

  • Generate oneadmin ssh keys:<xterm>

$ ssh-keygen </xterm> When prompted for a passphrase just press enter, so the private key is not encrypted.

  • Copy the public key to ~/.ssh/authorized_keys to let the oneadmin user log in without typing a password. Do this also on the front-end:<xterm>

$ cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys </xterm>

  • Tell the ssh client not to ask before adding hosts to the known_hosts file. This goes into ~/.ssh/config:<xterm>

$ cat ~/.ssh/config
Host *
  StrictHostKeyChecking no

</xterm>

:!: Check that the sshd daemon is running in the cluster nodes. oneadmin must be able to log in to the cluster nodes without being prompted for a password. Also remove any Banner option from the sshd_config file in the cluster nodes.
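A quick check from the front-end, using a hypothetical node called node01, should return the node's hostname without any prompt or banner:

<xterm> $ ssh node01 hostname </xterm>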

Hypervisor

The virtualization technology installed in your cluster nodes has to be configured so that the oneadmin user can start, control and monitor VMs. This usually means executing commands with root privileges or making oneadmin part of a given group. Please take a look at the virtualization guide that fits your site.
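For example, on a KVM/libvirt node this often amounts to adding oneadmin to the group allowed to talk to libvirt; the group name used here (libvirtd) is an assumption and varies between distributions:

<xterm> # usermod -a -G libvirtd oneadmin </xterm>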

Planning Checklist

If you have followed the previous steps your cluster should be ready to install and configure OpenNebula. You may want to print the following checklist to check your plan and proceed with the installation and configuration steps.

Software Requirements
ACTION                                                                 DONE/COMMENTS
Installation type: self-contained or system-wide                       self-contained
Installation directory                                                 /srv/cloud/one
OpenNebula software downloaded to                                      /srv/cloud/one/SRC
sqlite, g++, scons, ruby and other software requirements installed

User Accounts
ACTION                                                                 DONE/COMMENTS
oneadmin account and cloud group ready in the nodes and the front-end

Storage Checklist
ACTION                                                                 DONE/COMMENTS
/srv/cloud structure created in the front-end
/srv/cloud exported and accessible from the cluster nodes
mount point of /srv/cloud in the nodes, if different                   VMDIR=<mount_point>/var/

Cluster Nodes Checklist
ACTION                                                                 DONE/COMMENTS
hostnames of the cluster nodes
ruby and sshd installed in the nodes
oneadmin can ssh to the nodes without a password