OpenNebula Sandbox: VMware-based OpenNebula Cloud
Do you want to build a VMware-based OpenNebula cloud for testing, development or integration in under 20 minutes?
OpenNebula Sandbox is a series of appliances plus quick guides that help you get an OpenNebula cloud up and running quickly. This is useful for setting up pilot clouds and quickly testing new features. It is therefore intended for testers, early adopters, developers and also integrators.
This particular Sandbox is oriented to users with VMware-based infrastructures who want to try out OpenNebula.
The appliance that complements this guide is a VMware ESX compatible Virtual Machine disk, which comes with a minimal CentOS 6.2 distribution with OpenNebula 3.4.1 pre-installed. The VM is called “OpenNebula 3.4.1 Front-End Centos 6.2” within the ESX hypervisor, and ships with a hostname (if not changed by the DHCP server) of “ONE341”.
The VM has two network interfaces:
- One attached to the “Service Network”, configured through DHCP (see the figure of the following section).
- One attached to the “VM network”, with a fixed IP (172.16.33.1). We used an uncommon private network address to avoid collisions, but feel free to change this at your convenience.
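If the fixed address needs changing, it can be set in the usual CentOS 6 way. A sketch of the interface file follows, assuming the VM network interface shows up as eth1 inside the appliance and a /24 netmask (both are assumptions; check with ifconfig first):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth1 (sketch; eth1 and the
# netmask are assumptions, verify with ifconfig before editing)
DEVICE=eth1
BOOTPROTO=static
IPADDR=172.16.33.1
NETMASK=255.255.255.0
ONBOOT=yes
```

Running `service network restart` afterwards applies the change.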
OpenNebula has been configured specifically to deal with ESX servers. The following resources have been created so you can get a glimpse of a running OpenNebula cloud in the minimum possible time:
- A VMware datastore (named VMwareDS) that knows how to handle the vmdk format. More info here.
- A test image (named TinyCore-TestImage) registered in the VMware datastore. More info here.
- A virtual network (named SBvNet) configured for dynamic networking with VMware. More info on virtual networking.
- A VM template, ready to be launched! More info on templates.
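These pre-created resources can also be inspected from the appliance's command line as oneadmin. A minimal sketch using the standard OpenNebula CLI commands; the fallback message is only there so the loop degrades gracefully on a machine without the CLI installed:

```shell
# List the pre-created resources with the OpenNebula CLI (run inside
# the appliance as oneadmin). On other machines the tools are absent,
# hence the fallback message.
for c in "onedatastore list" "oneimage list" "onevnet list" "onetemplate list"; do
  echo "== $c =="
  $c 2>/dev/null || echo "(not available on this machine)"
done
```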
The first step is to get the appliance, which can be downloaded from the OpenNebula Marketplace.
The appliance comes with a .vmx file describing the VM that contains the OpenNebula front-end. You will need to use the VMware VI client (the installer can be downloaded from any ESX web page; just browse to the host's IP address) to deploy this VM in your ESX hypervisor.
Once the VM has booted up, use the Console tab in the VI client to log in, with user “oneadmin” and password “opennebula”. Possible checks to verify that everything is OK:
- ifconfig, to find out the IP given by the DHCP server.
- onetemplate list, to ensure that OpenNebula is up and running.
- exportfs, to find out if the NFS server is correctly configured.
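The checks above can be grouped into a single script, sketched here; the fallback messages only cover running it on a machine where these commands are absent:

```shell
# The three first-boot checks, grouped (run inside the appliance; on
# other machines the commands may be missing, hence the fallbacks).
echo "== IP address assigned by DHCP =="
ifconfig 2>/dev/null || echo "(ifconfig not available here)"
echo "== OpenNebula daemon =="
onetemplate list 2>/dev/null || echo "(onetemplate not available here)"
echo "== NFS exports =="
exportfs 2>/dev/null || echo "(exportfs not available here)"
```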
All the passwords of the accounts involved are “opennebula”.
The infrastructure needs to be set up in a similar fashion as depicted in the figure.
In this guide it is assumed that at least two ESX hypervisors are available, one to host the front-end and one to be used as a worker node. There is no reason why a single ESX cannot be used to set up the pilot cloud (use the same ESX to host the OpenNebula front-end and also as a worker node), although this guide assumes two for clarity's sake.
This is probably the step that involves the most work to get the pilot cloud up and running, but it is crucial to ensure its correct functioning. The appliance needs to be running prior to this. The ESX hosts that are going to be used as worker nodes need the following steps:
1) Creation of a oneadmin user. This user will be used by OpenNebula to perform the VM-related operations. In the VI client connected to the ESX host to be used as a worker node, go to the “Local Users & Groups” tab and add a new user as shown in the figure (the UID is important!). Afterwards, go to the “Permissions” tab and grant “Admin” permissions to oneadmin.
2) Grant ssh access. Again in the VI client, go to Configuration → Security Profile → Properties (upper left). Click on the SSH label and click “Start”. You can set it to start and stop with the host, as seen in the picture. Then the following needs to be done:
In the OpenNebula front-end, as the oneadmin user, print the public key and copy it to the clipboard:

$ cat .ssh/id_rsa.pub
Then, in the ESX host:

$ mkdir /etc/keys-oneadmin
$ chmod 755 /etc/keys-oneadmin
$ su - oneadmin
$ vi /etc/keys-oneadmin/authorized_keys
<paste here the contents of the clipboard and exit vi>
$ chmod 600 /etc/keys-oneadmin/authorized_keys
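The ESX-side commands above can be condensed into a small helper. This is a sketch under my own simplifications (the function name is hypothetical, and it writes the key file directly instead of switching users with su); the paths and permission bits are the guide's:

```shell
# install_one_key: append a public key to the authorized_keys file the
# guide creates, with the same permissions (755 directory, 600 file).
# Run with enough privileges to write under the target directory.
install_one_key() {
    keydir="$1"    # e.g. /etc/keys-oneadmin on the ESX host
    pubkey="$2"    # the id_rsa.pub copied from the front-end
    mkdir -p "$keydir"
    chmod 755 "$keydir"
    cat "$pubkey" >> "$keydir/authorized_keys"
    chmod 600 "$keydir/authorized_keys"
}
```

On the ESX host this would be invoked, for example, as `install_one_key /etc/keys-oneadmin /tmp/id_rsa.pub` after copying the key file over.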
3) Mount datastores. We now need to mount the two datastores exported by default by the appliance. Again in the VI client, go to Configuration → Storage → Add datastore (upper left). We need to add two datastores (0 and 100). The picture shows the details for the 100 datastore; to add the 0 one, simply change the references from 100 to 0 in the Folder and Datastore Name text boxes.
More info on datastores and different possible configurations.
The appliance ships with OpenNebula configured as far as possible; the only extra step required is to add the ESX credentials in /etc/one/vmwarerc. Please edit the file and set:
:username: "oneadmin"
:password: "password used for oneadmin in the above section"
OK, so now that everything is in place, let's start using your brand new OpenNebula cloud! Use your browser to access Sunstone. The URL would be:
Once you introduce the credentials for the “oneuser” user (remember, the password is “opennebula”) you will see the Sunstone dashboard. You can also log in as “oneadmin”; you will notice access to more functionality (basically, the administration and physical infrastructure management tasks).
You will be able to see the pre-created resources. Check out the image in the “Virtual Resources/Images” tab, the template in the “Virtual Resources/Templates” one, and the virtual network in “Infrastructure/Virtual Networks”.
It is time to launch our first VM. This is a TinyCore-based VM that can be launched through the template. Please select it and click the upper “Instantiate” button. If everything goes well, you should see the following in the “Virtual Resources/Virtual Machines” tab:
Once the VM is in the RUNNING state, you can click on the VNC icon and you should see the TinyCore desktop.
Let's also try to access it through ssh. Open a terminal in the TinyCore desktop and type:
$ sudo passwd tc <set the password>
The TinyCore VM is already contextualized. If you click on the row representing the VM in Sunstone, you can get the IP address assigned to it. Now, from an ssh session in the CentOS appliance (or any machine connected to the ESX worker node and in the 172.16.33.x network) you can ssh into the TinyCore VM:
$ ssh tc@<TinyCore-VM-IP>
Did we miss something? Please let us know!