This QuickStart guide shows how to prepare a simple cluster consisting of two physical hosts with a shared file system, how to install ONE, and how to configure ONE to manage VMs in that cluster. As seen in the following picture, the frontend machine is where ONE is installed. ONE supports the XEN and KVM hypervisors (and it can even interface with Amazon's EC2) on the hosts to manage the VM lifecycle across them. Non-shared storage is also supported, but this guide focuses on the XEN hypervisor and on shared storage for the images to run from.
We are going to use a cluster formed by three computers: the frontend, which will also act as the NIS and NFS server, and two hosts, aquila01 and aquila02. Let's call the frontend makito and assume it has an IP of 192.168.3.1.
The software requirements to install ONE in this cluster can be found here. We assume in this guide that XEN, NFS and NIS are correctly configured. Additionally, you will need the ONE tarball, which can be downloaded from the Software section.
We are going to set up a common user and group for the three machines; one of the easiest ways is to use NIS. Assuming makito is the NIS server, log on to it and type:
makito$ groupadd xen
makito$ useradd -G xen oneadmin
makito$ cd /var/yp
makito$ make
We need to know the group ID of the newly created xen group. It should now appear among the GIDs to which <oneadmin> belongs:
makito$ id oneadmin
From the output of the previous command, get the GID of the xen group.
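If you prefer not to pick the GID out by eye, a quick sketch like this captures it in a shell variable (assuming the xen group is already visible through getent on makito):
makito$ xen_gid=$(getent group xen | cut -d: -f3)
makito$ echo $xen_gid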
Now we have to create a local group (let's call it rootxen) on aquila01 and aquila02 that includes the local root user and uses the same GID as the xen group, so that the local group shares the xen group's privileges.
aquila01$ echo "rootxen:x:<xen_gid>:root" >> /etc/group
Replace <xen_gid> in the previous command with the corresponding number. Repeat this for aquila02.
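To double-check the result, listing root's groups on each host should now include the rootxen group carrying the xen GID:
aquila01$ id root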
The ONE server, here also the NFS server, is going to export two folders:
- /home folder: sharing this folder is useful to configure the ssh setup, and also to provide a /home folder for <oneadmin>. This step is optional.
- /opt/nebula folder: here we are going to place the ONE installation and the VM images.
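Note that /opt/nebula has to exist on makito before it can be exported. A minimal sketch; the oneadmin:xen ownership is an assumption, consistent with the image permissions used later in this guide:
makito$ mkdir -p /opt/nebula
makito$ chown oneadmin:xen /opt/nebula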
Now log into the ONE server and add the following lines to /etc/exports:
/home       192.168.3.0/255.255.255.0(rw,async,no_subtree_check)
/opt/nebula 192.168.3.0/255.255.255.0(rw,async,no_subtree_check)
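For the new entries to take effect, the NFS export table usually has to be reloaded (restarting the NFS server also works):
makito$ exportfs -ra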
Log into aquila01 and add the following to /etc/fstab:
makito:/home       /home       nfs soft,intr,rsize=32768,wsize=32768,rw 0 0
makito:/opt/nebula /opt/nebula nfs soft,intr,rsize=32768,wsize=32768,rw 0 0
Repeat the above for aquila02.
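With the fstab entries in place, the shares can be mounted on each host (this assumes the /opt/nebula mount point already exists there):
aquila01$ mount /home
aquila01$ mount /opt/nebula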
The <oneadmin> account has to be trusted by the nodes, so that the ONE server can log into them without a password. Let's do the trick. Logged in as <oneadmin> on makito:
makito$ ssh-keygen
Press Enter when prompted for a passphrase. As we now have a shared home folder, the following will be enough to achieve the ssh configuration on all the nodes:
makito$ cd ~/.ssh
makito$ cat id_rsa.pub >> authorized_keys
You can now try to ssh with the <oneadmin> account from makito to one of the nodes; you should get a login session without having to type a password.
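If you want to skip the interactive host-key prompt on the very first connection to each node, one option (not required by ONE itself) is to pre-populate known_hosts with ssh-keyscan, using the hostnames from this guide:
makito$ ssh-keyscan aquila01 aquila02 >> ~/.ssh/known_hosts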
The folder that will hold the Virtual Machine images has to be shared, and there are special requirements regarding permissions. Let's create the image folder:
makito$ mkdir /opt/nebula/images
Both the images and the folder have to:
- belong to the xen group
- be readable and writable by the xen group
Let's assume we have an image called disk.img; it needs to be placed in that folder with permissions like the following file:
makito$ ls -lrta /opt/nebula/images/disk.img
-rw-rw-r-- 1 oneadmin xen 4294967296 2008-03-26 15:56 /opt/nebula/images/disk.img
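If your image does not already have that ownership and mode, something along these lines should set it (adjust the path to your actual image):
makito$ chown oneadmin:xen /opt/nebula/images/disk.img
makito$ chmod 664 /opt/nebula/images/disk.img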
As oneadmin on makito, download the ONE tarball and untar it in the home folder. Change to the newly created folder and type:
makito$ scons
If there are any problems during the compilation, this may help. Once the compilation finishes successfully, let's install it to the target folder:
makito$ ./install.sh /opt/nebula/ONE
Now let's set the environment:
makito$ export ONE_LOCATION=/opt/nebula/ONE/
makito$ export ONE_XMLRPC=http://localhost:2633/RPC2
makito$ export PATH=$ONE_LOCATION/bin:$PATH
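To avoid retyping these variables on every login, you may want to append them to <oneadmin>'s shell profile; a sketch assuming bash:
makito$ cat >> ~/.bashrc << 'EOF'
export ONE_LOCATION=/opt/nebula/ONE/
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=$ONE_LOCATION/bin:$PATH
EOF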
Now it is time to start the ONE daemon and the scheduler. So don't get nervous and type on makito:
makito$ $ONE_LOCATION/bin/one start
If you get an "oned and scheduler started" message, your ONE installation is up and running.
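If the message does not appear, the log files usually give a hint; with a self-contained installation like this one they should live under $ONE_LOCATION/var (this path is an assumption based on the install prefix used above):
makito$ tail $ONE_LOCATION/var/oned.log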
Let's set up the cluster in ONE. The first thing is adding hosts to ONE. This can be done by means of the onehost command (see the Command Line Interface for more information). So let's add both aquila01 and aquila02:
makito$ onehost add aquila01 im_xen vmm_xen
makito$ onehost add aquila02 im_xen vmm_xen
We are giving ONE hints about what it needs in order to run VMs on both of those hosts.
Let's do a sample session to make sure everything is working. First, check that the cluster hosts were added smoothly. Issue the following command as <oneadmin> and check the output:
makito$ onehost list
 HID NAME       RVM  TCPU  FCPU  ACPU     TMEM     FMEM STAT
   0 aquila01     0   800   800   800  8194468  7867604   on
   1 aquila02     0   800   797   800  8387584  1438720   on
Once we have checked the nodes, we can submit a VM to ONE by using onevm. We are going to build a VM template for the image we placed in the /opt/nebula/images directory. The following will do:
NAME   = vm-example
CPU    = 0.5
MEMORY = 128

OS     = [
  kernel = "/boot/vmlinuz-2.6.18-4-xen-amd64",
  initrd = "/boot/initrd.img-2.6.18-4-xen-amd64",
  root   = "sda1" ]

DISK   = [
  source   = "/opt/nebula/images/disk.img",
  target   = "sda1",
  readonly = "no" ]

NIC    = [ mac = "00:ff:72:17:20:27" ]
Save it in your home and name it myfirstVM.template.
You can add more parameters; check this for a complete list. Also, you can add more DISKs if you need, say, a swap partition, as in the sketch below.
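For instance, a swap DISK section might look like the following (the size is in MB; the sdb target is an assumption, pick a device not used by the other disks):
DISK   = [
  type   = "swap",
  size   = 1024,
  target = "sdb" ]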
Once we have tailored the requirements to our needs (especially the CPU and MEMORY fields), ensuring that the VM fits into at least one of the two hosts, let's submit the VM (assuming you are currently in your home folder):
makito$ onevm submit myfirstVM.template
This should come back with an ID that we can use to identify the VM for monitoring and control, again through the use of the onevm command:
makito$ onevm list
The output should look like:
  ID    NAME STAT CPU    MEM HOSTNAME       TIME
   0   one-0 runn   0  65536 aquila01 00 0:00:02
The STAT field tells the state of the virtual machine. If it shows the runn state, your virtual machine is up and running. Depending on how you set up your image, you may know its IP address. If that is the case, you can now try to log into the VM. Keep that connection alive in another terminal so we can check the live migration, which ought to occur with no apparent downtime.
To perform a live migration we use yet again the onevm command. Let's move the VM (with VID=0) to aquila02 (HID=1):
makito$ onevm livemigrate 0 1
This will move the VM from aquila01 to aquila02. Then, your onevm list should show something like the following if all went smoothly:
  ID    NAME STAT CPU    MEM HOSTNAME       TIME
   0   one-0 runn   0  65536 aquila02 00 0:00:06
The last test to verify the correctness of this live migration is to make sure the ssh connection to the VM is still open. If that is the case, you have succeeded in completing this simple usage scenario.