Posts

OpenNebula – Securing Sunstone’s NoVNC connections with Secure Websocket and your own Certificate Authority

As a newcomer I faced some problems when dealing with NoVNC connections, so today I’m sharing this post, which may help you.

If you’re already using SSL to secure Sunstone’s access, you could get an error when opening a VNC window: “VNC Connection in progress”. It’s quite possible that your browser is silently blocking the VNC connection over websockets. The reason? You’re using an https connection with Sunstone, but you’re trying to open an unencrypted websocket connection.

[Image: VNC Connection In Progress]

This is easily solved: just edit the following lines in the # UI Settings section of your /etc/one/sunstone-server.conf configuration file:

:vnc_proxy_support_wss: yes
:vnc_proxy_cert: /etc/one/certs/one-tornasol.crt
:vnc_proxy_key: /etc/one/certs/one-tornasol.key

We’ve just activated the secure websockets (wss) option and told Sunstone where to find the SSL certificate and the key (if it’s not already included in the cert). Now, just restart your Sunstone server.
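The restart itself can be done with the sunstone-server script that ships with OpenNebula (a minimal sketch; if you installed from distribution packages, the service name may differ, e.g. opennebula-sunstone):

```shell
# Restart Sunstone as the oneadmin user so the new WSS settings are loaded.
su - oneadmin -c "sunstone-server stop"
su - oneadmin -c "sunstone-server start"
```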

 

There’s another issue with VNC and SSL when using self-signed certificates. When running your own lab or a development environment you may not have an SSL certificate signed by a real CA, so you opt for self-signed certificates, which are quick and free to use… but this has some drawbacks.

Trying to protect you from security threats, your Internet browser could have problems with secure websockets and self-signed certificates, and messages like “VNC Disconnect timeout” and “VNC Server disconnected (code: 1006)” could show up.

[Image: VNC Disconnected]

In my labs I just use the openssl command (available in the openssl package on CentOS/Red Hat and Debian/Ubuntu) to generate my own Certificate Authority certificate and sign the SSL certificates with it.

First we’ll create the /etc/one/certs directory on the Frontend and set the right owner:

mkdir -p /etc/one/certs
chown -R oneadmin:oneadmin /etc/one/certs

We’ll generate an RSA key with 2048 bits for the CA:

openssl genrsa -out /etc/one/certs/oneCA.key 2048

Now, we’ll produce the CA certificate using the key we’ve just created, and we’ll have to answer some questions to identify our CA (e.g., my CA will be named ArtemIT Labs CA). Note that this CA certificate will be valid for 3650 days: 10 years!

openssl req -x509 -new -nodes -key /etc/one/certs/oneCA.key -days 3650 -out /etc/one/certs/oneCA.pem

You are about to be asked to enter information that will be incorporated into your certificate request.

What you are about to enter is what is called a Distinguished Name or a DN.

There are quite a few fields but you can leave some blank

For some fields there will be a default value,

If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:ES
State or Province Name (full name) []:Valladolid
Locality Name (eg, city) [Default City]:Valladolid
Organization Name (eg, company) [Default Company Ltd]:ArtemIT Labs
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:ArtemIT Labs CA
Email Address []:

Now, we already have a CA certificate and a key to sign SSL certificates. Time to generate the SSL certificate for WSS connections.

First, we’ll create the key for the Frontend, then we’ll generate a certificate signing request (CSR), answering some questions. In this example my Frontend server is called tornasol.artemit.local and I’ve set no challenge password.

openssl genrsa -out /etc/one/certs/one-tornasol.key 2048


openssl req -new -key /etc/one/certs/one-tornasol.key -out /etc/one/certs/one-tornasol.csr

You are about to be asked to enter information that will be incorporated into your certificate request.

What you are about to enter is what is called a Distinguished Name or a DN.

There are quite a few fields but you can leave some blank

For some fields there will be a default value,

If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:ES
State or Province Name (full name) []:Valladolid
Locality Name (eg, city) [Default City]:Valladolid
Organization Name (eg, company) [Default Company Ltd]:ArtemIT Labs
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:tornasol.artemit.local
Email Address []:
Please enter the following 'extra' attributes to be sent with your certificate request
A challenge password []:
An optional company name []:
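The sunstone-server.conf above references one-tornasol.crt, so the CSR still has to be signed by our CA to produce that certificate. A minimal sketch (-CAcreateserial creates the CA serial file on first use; adjust the paths and validity to your setup):

```shell
# Sign the Frontend's CSR with our CA, producing the certificate
# referenced by :vnc_proxy_cert: in sunstone-server.conf.
openssl x509 -req -in /etc/one/certs/one-tornasol.csr \
  -CA /etc/one/certs/oneCA.pem -CAkey /etc/one/certs/oneCA.key \
  -CAcreateserial -out /etc/one/certs/one-tornasol.crt -days 3650

# Check that the new certificate chains back to our CA:
openssl verify -CAfile /etc/one/certs/oneCA.pem /etc/one/certs/one-tornasol.crt
```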

If everything is fine you’ll have the certs and keys under /etc/one/certs.

Now we’ll copy the oneCA.pem file to the computers where we’ll use a browser to open the Sunstone GUI.

In Firefox we’ll import oneCA.pem (the CA certificate file) via Preferences -> Advanced -> Certificates -> Authorities tab, checking all the trust options as shown in this image. In Chrome under Linux, importing your CA cert follows the same process.

[Image: trusting the CA certificate in Firefox]

If using IE or Chrome under Windows, change the extension from pem to crt, double-click the certificate and add it to the Trusted Root Certification Authorities store. Some warnings will show; just accept them.

Once our CA certificate is trusted, we can open our encrypted NoVNC windows.

[Screenshot: encrypted NoVNC session, 2015-04-25]

Free, quick and secure for your lab environment. But remember: don’t do this in a production environment!

Cheers!

OneVBox: New VirtualBox driver for OpenNebula

This new contribution to the OpenNebula Ecosystem expands OpenNebula by enabling the use of the well-known hypervisor VirtualBox to create and manage virtual machines.

OneVBox supports the upcoming OpenNebula 3.0 (currently in beta) and VirtualBox 4.0. It is composed of several scripts, mostly written in Ruby, which interpret the XML virtual machine descriptions provided by OpenNebula and perform necessary actions in the VirtualBox node.

OneVBox can deploy, but also save, restore and migrate VirtualBox VMs from one physical node to another.

Using the new OneVBox driver is very easy and can be done in a few steps:

  1. Download and install the driver. Run from the driver folder:
    user@frontend $> ./install.sh

    Make sure that you have permissions to write in the OpenNebula folders. $ONE_LOCATION can be used to define the self-contained install path; otherwise it will be installed system-wide.

  2. Enable the plugin. Put this in the oned.conf file and start OpenNebula:
    IM_MAD = [
    name = "im_vbox",
    executable = "one_im_ssh",
    arguments = "-r 0 -t 15 vbox" ]

    VM_MAD = [
    name = "vmm_vbox",
    executable = "one_vmm_exec",
    arguments = "vbox",
    default = "vmm_exec/vmm_exec_vbox.conf",
    type = "xml" ]

  3. Add a VirtualBox host. For example:
    oneadmin@frontend $> onehost create hostname im_vbox vmm_vbox tm_ssh

    OneVBox also includes an OpenNebula Sunstone plugin that enables adding VirtualBox hosts and creating VirtualBox VM templates from the web interface. To enable it, just add the following lines to etc/sunstone-plugins.yaml:

    - user-plugins/vbox-plugin.js:
    :group:
    :ALL: true
    :user:

    (Tip: When copy/pasting, avoid using tabs in YAML files, they’re not supported)
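A quick way to catch stray tab characters before restarting Sunstone (a sketch, run from the OpenNebula install directory):

```shell
# grep -P understands the \t escape; any hit means the YAML parser will choke.
grep -Pn "\t" etc/sunstone-plugins.yaml || echo "no tabs found"
```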

For more information, you can visit the OpenNebula Ecosystem page for OneVBox. If you have questions or problems, please let us know on the Ecosystem mailing list or open an issue in the OneVBox github tracker.

Setting up High Availability in OpenNebula with LVM

In this post, I will explain how to install OpenNebula on two servers in a fully redundant environment. This is the English translation of an article in Italian on my blog.

The idea is to have two Cloud Controllers in High Availability (HA) active/passive mode using Pacemaker/Heartbeat. These nodes will also provide storage by exporting a DRBD partition via ATA-over-Ethernet; the VM disks will be created as LVM logical volumes on this partition. This solution, besides being totally redundant, provides high-speed storage, because we deploy the VM partitions via snapshots instead of using files on an NFS filesystem.

Nonetheless, we will still use NFS to export the /srv/cloud directory with OpenNebula data.

System Configuration

As a reference, this is the configuration of our own servers. Your servers do not have to be exactly the same; we will simply be using these two servers to explain certain aspects of the configuration.

First Server:

  • Linux Ubuntu 64-bit server 10.10
  • eth0 and eth1 bonded with IP 172.17.0.251 (SAN network)
  • eth2 with IP 172.16.0.251 (LAN)
  • 1 TB internal HD partitioned as follows:
    • sda1: 40 GB mounted on /
    • sda2: 8 GB swap
    • sda3: 1 GB for metadata
    • sda5: 40 GB for /srv/cloud/one
    • sda6: 850 GB datastore

Second Server:

  • Linux Ubuntu 64-bit server 10.10
  • eth0 and eth1 bonded with IP 172.17.0.252 (SAN network)
  • eth2 with IP 172.16.0.252 (LAN)
  • 1 TB internal HD partitioned as follows:
    • sda1: 40 GB mounted on /
    • sda2: 8 GB swap
    • sda3: 1 GB for metadata
    • sda5: 40 GB for /srv/cloud/one
    • sda6: 850 GB datastore

Installing the base system

Install Ubuntu server 64-bit 10.10 on the two servers, enabling the OpenSSH server during installation. In our case, the servers are each equipped with two 1 TB SATA disks in hardware mirror, on which we will create a 40 GB partition (sda1) for the root filesystem, an 8 GB partition (sda2) for swap, a third (sda3) of 1 GB for metadata, a fourth (sda5) of 40 GB for the /srv/cloud/one directory replicated by DRBD, and a fifth (sda6) with the remaining space (approximately 850 GB) that will be used by DRBD to export the VM filesystems.

As for network cards, each server has three: two (eth0, eth1) are bonded to handle data replication and communication with the compute nodes on the cluster network (SAN), 172.17.0.0/24, and a third (eth2) is used for access from outside the cluster on the LAN, 172.16.0.0/24.

Unless otherwise specified, these instructions are specific to the above two hosts, but should work on your own system with minor modifications.

Network Configuration

First we modify the hosts file:

/etc/hosts
172.16.0.250 cloud-cc.lan.local cloud-cc
172.16.0.251 cloud-cc01.lan.local
172.16.0.252 cloud-cc02.lan.local
172.17.0.1 cloud-01.san.local
172.17.0.2 cloud-02.san.local
172.17.0.3 cloud-03.san.local
172.17.0.250 cloud-cc.san.local
172.17.0.251 cloud-cc01.san.local cloud-cc01
172.17.0.252 cloud-cc02.san.local cloud-cc02

Next, we proceed with the system configuration. First we configure the bonding interface, installing the required packages:

apt-get install ethtool ifenslave

Then we load the module at startup with the correct parameters by creating the file /etc/modprobe.d/bonding.conf:

/etc/modprobe.d/bonding.conf
alias bond0 bonding
options bonding mode=0 miimon=100 downdelay=200 updelay=200

And configure the network interfaces:

/etc/network/interfaces
auto bond0
iface bond0 inet static
bond_miimon  100
bond_mode balance-rr
address  172.17.0.251 # 172.17.0.252 on server 2
netmask  255.255.255.0
up /sbin/ifenslave bond0 eth0 eth1
down /sbin/ifenslave -d bond0 eth0 eth1

auto eth2
iface eth2 inet static
address  172.16.0.251 # 172.16.0.252 on server 2
netmask  255.255.255.0

Configuring MySQL

I prefer to configure MySQL circular replication rather than have Heartbeat manage the service startup: MySQL opens quickly and, being active on both servers, it saves a few seconds during the switch in case of a fault.

First we install MySQL:

apt-get install mysql-server libmysqlclient16-dev libmysqlclient16

and create the database for OpenNebula:

mysql -p
create database opennebula;
create user oneadmin identified by 'oneadmin';
grant all on opennebula.* to 'oneadmin'@'%';
exit;

Then we configure active/active replica on server 1:

/etc/mysql/conf.d/replica.cnf @ Server 1
[mysqld]
bind-address			= 0.0.0.0
server-id                       = 10
auto_increment_increment        = 10
auto_increment_offset           = 1
master-host                     = cloud-cc02.san.local
master-user                     = replicauser
master-password                 = replicapass
log_bin				= /var/log/mysql/mysql-bin.log
binlog_ignore_db		= mysql

And on server 2:

/etc/mysql/conf.d/replica.cnf @ server 2
[mysqld]
bind-address			= 0.0.0.0
server-id                       = 20
auto_increment_increment        = 10
auto_increment_offset           = 2
master-host                     = cloud-cc01.san.local
master-user                     = replicauser
master-password                 = replicapass
log_bin				= /var/log/mysql/mysql-bin.log
binlog_ignore_db		= mysql

Finally, on both servers, restart MySQL and create the replica user:

create user 'replicauser'@'%.san.local' identified by 'replicapass';
grant replication slave on *.* to 'replicauser'@'%.san.local';
start slave;
show slave status\G

DRBD Configuration

Now it is DRBD’s turn, configured in standard active/passive mode. First install the needed packages:

apt-get install drbd8-utils
modprobe drbd

So let’s edit the configuration file:

/etc/drbd.d/global_common.conf
global {
usage-count yes;
# minor-count dialog-refresh disable-ip-verification
}

common {
protocol C;

handlers {
pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
# fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
# split-brain "/usr/lib/drbd/notify-split-brain.sh root";
# out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
# before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
# after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
}

startup {
# wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
wfc-timeout 120; ## 2 min
degr-wfc-timeout 120; ## 2 minutes.
}

disk {
# on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
# no-disk-drain no-md-flushes max-bio-bvecs
on-io-error detach;
}

net {
# sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
# max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
# after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
# allow-two-primaries;
# after-sb-0pri discard-zero-changes;
# after-sb-1pri discard-secondary;

timeout 60;
connect-int 10;
ping-int 10;
max-buffers 2048;
max-epoch-size 2048;
}

syncer {
# rate after al-extents use-rle cpu-mask verify-alg csums-alg
rate 500M;
}
}

And let’s create one-disk definition:

/etc/drbd.d/one-disk.res 
resource one-disk {
    on cloud-cc01 {
	address 172.17.0.251:7791;
	device /dev/drbd1;
	disk /dev/sda5;
	meta-disk /dev/sda3[0];
    }
    on cloud-cc02 {
	address 172.17.0.252:7791;
	device /dev/drbd1;
	disk /dev/sda5;
	meta-disk /dev/sda3[0];
    }
}

and data-disk:

/etc/drbd.d/data-disk.res 
resource data-disk {
    on cloud-cc01 {
	address 172.17.0.251:7792;
	device /dev/drbd2;
	disk /dev/sda6;
	meta-disk /dev/sda3[1];
    }
    on cloud-cc02 {
	address 172.17.0.252:7792;
	device /dev/drbd2;
	disk /dev/sda6;
	meta-disk /dev/sda3[1];
    }
}

Now, on both nodes, we create the metadata disk:

drbdadm create-md one-disk
drbdadm create-md data-disk
/etc/init.d/drbd reload

Finally, only on server 1, activate the disk:

drbdadm -- --overwrite-data-of-peer primary one-disk
drbdadm -- --overwrite-data-of-peer primary data-disk
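While the initial synchronization runs, its progress can be followed on either node (output format varies slightly between DRBD 8.x releases):

```shell
# /proc/drbd shows one line per resource: "cs:SyncSource"/"cs:SyncTarget"
# with a percentage during the initial sync, and "cs:Connected
# ro:Primary/Secondary" once both disks are UpToDate.
cat /proc/drbd
```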

Exporting the disks

As already mentioned, the two DRBD partitions will be visible through the network, although in different ways: one-disk will be exported through NFS, data-disk will be exported by ATA-over-Ethernet and will present its LVM partitions to the hypervisor.

Install the packages:

apt-get install vblade nfs-kernel-server nfs-common portmap

We’ll disable automatic NFS and AoE startup because Heartbeat will handle them:

update-rc.d nfs-kernel-server disable
update-rc.d vblade disable

Then we create the export for the OpenNebula directory:

/etc/exports
/srv/cloud/one          172.16.0.0/24(rw,fsid=0,insecure,no_subtree_check,async)

and we create the necessary directory:

mkdir -p /srv/cloud/one

Finally we have to configure the idmapd daemon to correctly propagate users and permissions over the network.

/etc/idmapd.conf
[General]

Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = lan.local # Modify this

[Mapping]

Nobody-User = nobody
Nobody-Group = nobody

We also have to configure the default NFS settings:

/etc/default/nfs-kernel-server
NEED_SVCGSSD=no # no is default

and

/etc/default/nfs-common
NEED_IDMAPD=yes
NEED_GSSD=no # no is default

Fault Tolerant daemon configuration

There are two packages that can handle highly available services on Linux: corosync and heartbeat. Personally I prefer heartbeat and these instructions refer to it, but since most of the configuration is done through Pacemaker, you are perfectly free to opt for corosync instead.

First install the needed packages:

apt-get install heartbeat pacemaker

and configure heartbeat daemon:

/etc/ha.d/ha.cf
autojoin none
bcast bond0
warntime 3
deadtime 6
initdead 60
keepalive 1
node cloud-cc01
node cloud-cc02
crm respawn

Only on the first server, we create the authkeys file and copy it to the second server:

( echo -ne "auth 1\n1 sha1 "; \
  dd if=/dev/urandom bs=512 count=1 | openssl md5 ) \
  > /etc/ha.d/authkeys
chmod 0600 /etc/ha.d/authkeys
scp /etc/ha.d/authkeys cloud-cc02:/etc/ha.d/
ssh cloud-cc02 chmod 0600 /etc/ha.d/authkeys
/etc/init.d/heartbeat restart
ssh cloud-cc02 /etc/init.d/heartbeat restart

After a minute or two, heartbeat will be online:

crm_mon -1 | grep Online
Online: [ cloud-cc01 cloud-cc02 ]

Now we’ll configure cluster services via pacemaker.
Setting default options:

crm configure
property no-quorum-policy=ignore
property stonith-enabled=false
property default-resource-stickiness=1000
commit
bye

The two shared IPs, 172.16.0.250 and 172.17.0.250:

crm configure
primitive lan_ip IPaddr params ip=172.16.0.250 cidr_netmask="255.255.255.0" nic="eth2" op monitor interval="40s" timeout="20s"
primitive san_ip IPaddr params ip=172.17.0.250 cidr_netmask="255.255.255.0" nic="bond0" op monitor interval="40s" timeout="20s"
commit
bye

The DRBD resource for one-disk:

crm configure
primitive drbd_one ocf:linbit:drbd params drbd_resource="one-disk" op monitor interval="40s" timeout="20s"
ms ms_drbd_one drbd_one meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
commit
bye

The one-disk mount:

crm configure
primitive fs_one ocf:heartbeat:Filesystem params device="/dev/drbd/by-res/one-disk" directory="/srv/cloud/one" fstype="ext4"
commit
bye
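The ha_group created later references an nfs_one resource that is not defined above; a hedged sketch that wraps the distribution's init script as an LSB resource (where available, the ocf:heartbeat:nfsserver agent is a richer alternative):

```shell
crm configure
primitive nfs_one lsb:nfs-kernel-server
commit
bye
```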

The DRBD resource for data-disk:

crm configure
primitive drbd_data ocf:linbit:drbd params drbd_resource="data-disk"  op monitor interval="40s" timeout="20s"
ms ms_drbd_data drbd_data meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
commit
bye

The AoE export of data-disk:

crm configure
primitive aoe_data ocf:heartbeat:AoEtarget params device="/dev/drbd/by-res/data-disk" nic="bond0" shelf="0" slot="0" op monitor interval="40s" timeout="20s"
commit
bye

Now we have to configure the correct startup order for the services:

crm configure
group ha_group san_ip lan_ip fs_one nfs_one aoe_data
colocation ha_col inf: ha_group ms_drbd_one:Master ms_drbd_data:Master
order ha_after_drbd inf: ms_drbd_one:promote ms_drbd_data:promote ha_group:start
commit
bye

We will modify this configuration later to add OpenNebula and lighttpd startup.

LVM Configuration

LVM2 will allow us to create partitions for the virtual machines and deploy them on a snapshot basis.

Install the package on both machines.

apt-get install lvm2

We have to modify the filter configuration so that LVM scans only the DRBD disks.

/etc/lvm/lvm.conf
...
filter = [ "a|drbd.*|", "r|.*|" ]
...
write_cache_state = 0

ATTENTION: Ubuntu uses a ramdisk (initramfs) to boot the system, so we also have to modify the lvm.conf copy inside the ramdisk.
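On Ubuntu the ramdisk copy is refreshed with the standard tool:

```shell
# Rebuild the initramfs so the modified /etc/lvm/lvm.conf is picked up at boot.
update-initramfs -u
```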

Now we remove the cache:

rm /etc/lvm/cache/.cache

Only on server 1 we have to create the LVM physical volume and volume group:

pvcreate /dev/drbd/by-res/data-disk
vgcreate one-data /dev/drbd2
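To illustrate the snapshot-based deployment this setup enables, this is roughly what happens for each VM disk (a sketch with hypothetical volume names, not the tm_lvm driver's actual code):

```shell
# A "golden" volume holds the template image once...
lvcreate -L 10G -n lv-template one-data
# ...and each VM disk is a copy-on-write snapshot of it, created in
# seconds instead of copying an image file over NFS.
lvcreate -s -L 2G -n lv-one-42 /dev/one-data/lv-template
```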

Install and configure OpenNebula

We are almost done. Now we download and install OpenNebula 2.2 from source.

First we have to install the prerequisites:

apt-get install libsqlite3-dev libxmlrpc-c3-dev scons g++ ruby libopenssl-ruby libssl-dev ruby-dev make rake rubygems libxml-parser-ruby1.8 libxslt1-dev libxml2-dev genisoimage  libsqlite3-ruby libsqlite3-ruby1.8 rails thin
gem install nokogiri
gem install json
gem install sinatra
gem install rack
gem install thin
cd /usr/bin
ln -s rackup1.8 rackup

Then we have to create the OpenNebula user and group:

groupadd cloud
useradd -d /srv/cloud/one  -s /bin/bash -g cloud -m oneadmin
chown -R oneadmin:cloud /srv/cloud/
chmod 775 /srv
id oneadmin # we have to use this id also on cluster node for oneadmin/cloud

Now we switch to the unprivileged user to create the SSH keys for cluster communications:

su - oneadmin
ssh-keygen # use default
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 640 ~/.ssh/authorized_keys
mkdir  ~/.one

We create a .profile file with default variables:

~/.profile
export ONE_AUTH='/srv/cloud/one/.one/one_auth'
export ONE_LOCATION='/srv/cloud/one'
export ONE_XMLRPC='http://localhost:2633/RPC2'
export PATH=$PATH':/srv/cloud/one/bin'

Now we have to create the one_auth file to set up a default user inside OpenNebula (used, for example, by the API or Sunstone):

~/.one/one_auth
oneadmin:password

And load the default variables before compiling:

source .profile

Now download and install OpenNebula:

cd
wget http://dev.opennebula.org/attachments/download/339/opennebula-2.2.tar.gz
tar zxvf opennebula-2.2.tar.gz
cd opennebula-2.2
scons -j2 mysql=yes
./install.sh -d /srv/cloud/one

About the configuration: this is my oned.conf file. I use the Xen hypervisor, but you can also use KVM.

/srv/cloud/one/etc/oned.conf
HOST_MONITORING_INTERVAL = 60

VM_POLLING_INTERVAL      = 60

VM_DIR=/srv/cloud/one/var

SCRIPTS_REMOTE_DIR=/var/tmp/one

PORT=2633

DB = [ backend = "mysql",
       server  = "localhost",
       port    = 0,
       user    = "oneadmin",
       passwd  = "oneadmin",
       db_name = "opennebula" ]

VNC_BASE_PORT = 5900

DEBUG_LEVEL=3

NETWORK_SIZE = 254

MAC_PREFIX   = "02:ab"

IMAGE_REPOSITORY_PATH = /srv/cloud/one/var/images
DEFAULT_IMAGE_TYPE    = "OS"
DEFAULT_DEVICE_PREFIX = "sd"

IM_MAD = [
    name       = "im_xen",
    executable = "one_im_ssh",
    arguments  = "xen" ]

VM_MAD = [
    name       = "vmm_xen",
    executable = "one_vmm_ssh",
    arguments  = "xen",
    default    = "vmm_ssh/vmm_ssh_xen.conf",
    type       = "xen" ]

TM_MAD = [
    name       = "tm_lvm",
    executable = "one_tm",
    arguments  = "tm_lvm/tm_lvm.conf" ]

HM_MAD = [
    executable = "one_hm" ]

VM_HOOK = [
    name      = "image",
    on        = "DONE",
    command   = "image.rb",
    arguments = "$VMID" ]

HOST_HOOK = [
    name      = "error",
    on        = "ERROR",
    command   = "host_error.rb",
    arguments = "$HID -r n",
    remote    = "no" ]

VM_HOOK = [
   name      = "on_failure_resubmit",
   on        = "FAILED",
   command   = "/usr/bin/env onevm resubmit",
   arguments = "$VMID" ]

The only important thing is to modify /srv/cloud/one/etc/tm_lvm/tm_lvm.rc, setting the default VG:

/srv/cloud/one/etc/tm_lvm/tm_lvm.rc
...
VG_NAME=one-data
...

Now copy the init.d script from the source tree to /etc/init.d, but do not set it to start at boot.

I have modified the default script to also start Sunstone:

/etc/init.d/one
#! /bin/sh
### BEGIN INIT INFO
# Provides:          opennebula
# Required-Start:    $remote_fs
# Required-Stop:     $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: OpenNebula init script
# Description:       OpenNebula cloud initialisation script
### END INIT INFO

# Author: Soren Hansen - modified by Alberto Zuin

PATH=/sbin:/usr/sbin:/bin:/usr/bin:/srv/cloud/one
DESC="OpenNebula cloud"
NAME=one
SUNSTONE=/srv/cloud/one/bin/sunstone-server
DAEMON=/srv/cloud/one/bin/$NAME
DAEMON_ARGS=""
PIDFILE=/var/run/$NAME.pid
SCRIPTNAME=/etc/init.d/$NAME

# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0

# Load the VERBOSE setting and other rcS variables
. /lib/init/vars.sh

# Define LSB log_* functions.
# Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
. /lib/lsb/init-functions

#
# Function that starts the daemon/service
#
do_start()
{
mkdir -p /var/run/one /var/lock/one
chown oneadmin /var/run/one /var/lock/one
su - oneadmin -s /bin/sh -c "$DAEMON start"
su - oneadmin -s /bin/sh -c "$SUNSTONE start"
}

#
# Function that stops the daemon/service
#
do_stop()
{
su - oneadmin -s /bin/sh -c "$SUNSTONE stop"
su - oneadmin -s /bin/sh -c "$DAEMON stop"
}

case "$1" in
start)
[ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
do_start
case "$?" in
0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
esac
;;
stop)
[ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
do_stop
case "$?" in
0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
esac
;;
restart|force-reload)
#
# If the "reload" option is implemented then remove the
# 'force-reload' alias
#
log_daemon_msg "Restarting $DESC" "$NAME"
do_stop
case "$?" in
0|1)
do_start
case "$?" in
0) log_end_msg 0 ;;
1) log_end_msg 1 ;; # Old process is still running
*) log_end_msg 1 ;; # Failed to start
esac
;;
*)
# Failed to stop
log_end_msg 1
;;
esac
;;
*)
echo "Usage: $SCRIPTNAME {start|stop|restart|force-reload}" >&2
exit 3
;;
esac

:

and set it with execute permissions:

chmod 755 /etc/init.d/one

Configuring the HTTPS proxy for Sunstone

Sunstone is the web interface for cloud administration, if you do not want to use the command line. It works on port 4567 and is not encrypted, so we’ll use lighttpd to proxy requests over an HTTPS-encrypted connection.

First install the daemon:

apt-get install ssl-cert lighttpd

Then generate certificates:

/usr/sbin/make-ssl-cert generate-default-snakeoil
cat /etc/ssl/private/ssl-cert-snakeoil.key /etc/ssl/certs/ssl-cert-snakeoil.pem > /etc/lighttpd/server.pem

and create symlinks to enable ssl and proxy modules:

ln -s /etc/lighttpd/conf-available/10-ssl.conf /etc/lighttpd/conf-enabled/
ln -s /etc/lighttpd/conf-available/10-proxy.conf /etc/lighttpd/conf-enabled/

And modify the lighttpd setup to enable the proxy to Sunstone:

/etc/lighttpd/conf-available/10-proxy.conf
proxy.server = ( "" =>
                 ( "" =>
                   (
                     "host" => "127.0.0.1",
                     "port" => 4567
                   )
                 )
               )

Starting lighttpd and OpenNebula with heartbeat

Now we add the startup scripts to heartbeat. First of all, put both cluster nodes in standby:

crm node
standby cloud-cc01
standby cloud-cc02
bye

Then we can change the configuration:

crm configure
primitive OpenNebula lsb:one
primitive lighttpd lsb:lighttpd
delete ha_group
group ha_group san_ip lan_ip fs_one nfs_one aoe_data OpenNebula lighttpd
colocation ha_col inf: ha_group ms_drbd_one:Master ms_drbd_data:Master
order ha_after_drbd inf: ms_drbd_one:promote ms_drbd_data:promote ha_group:start
commit
bye

And startup the cluster again:

crm node
online cloud-cc01
online cloud-cc02
bye

That’s all folks!
Thanks,
Alberto Zuin – http://www.anzs.it