vOneCloud 1.4 Released! Cloudify vSphere Infrastructures

We want you to know that OpenNebula Systems has just announced the availability of vOneCloud version 1.4.

Several exciting features have been introduced in vOneCloud 1.4. The appliance that helps you turn your vSphere infrastructure into a private cloud now generates daily reports that every user can consult to check their resource consumption, with associated costs defined by the Cloud Administrator. The Virtual Datacenter provisioning model has been revisited to enable easy resource sharing among different groups, as well as to simplify configuration. The interface has also been improved to smooth the workflow of importing vCenter resources via the vOneCloud web interface, Sunstone. But probably most importantly, vOneCloud 1.4 adds multi-VM management capabilities, enabling the management of sets of interconnected VMs (services), including the ability to set up elasticity rules that automatically increase or decrease the number of nodes composing a service, according to easily programmed rules that take the service demands into account.

Improvements are also in place for the Control Panel, a web interface that eases the configuration of vOneCloud services and enables smooth one-click upgrades to newer versions, introducing features to aid in troubleshooting the appliance.

The above features and components add to the already present ability to expose a multi-tenant cloud-like provisioning layer through the use of virtual datacenters, self-service portal, or hybrid cloud computing to connect in-house vCenter infrastructures with public clouds. vOneCloud seamlessly integrates with running vCenter virtualized infrastructures, leveraging advanced features such as vMotion, HA or DRS scheduling provided by the VMware vSphere product family.

vOneCloud is zero intrusive; try it out without the need to commit to it. If you happen not to like it, just remove the appliance!



New Open Cloud Reference Architecture

We are excited to announce the release of the first version of the Open Cloud Reference Architecture. The OpenNebula Reference Architecture is a blueprint to guide IT architects, consultants, administrators and field practitioners in the design and deployment of public and private clouds fully based on open-source platforms and technologies. This Reference Architecture has been created from the collective information and experiences of hundreds of users and cloud client engagements. Besides the main logical components and their interrelationships, this reference documents the software products, configurations, and requirements of the infrastructure platforms recommended for a smooth OpenNebula installation. Three optional functionalities complete the architecture: high availability, cloud bursting for workload outsourcing, and federation of geographically dispersed data centers.

The document describes the reference architecture for Basic (small to medium-scale) and Advanced (medium to large-scale) OpenNebula Clouds and provides recommended software for the main architectural components, along with the rationale behind the recommendations. Each section also provides information about other open-source infrastructure platforms tested and certified by OpenNebula to work in enterprise environments. To complement these certified components, the OpenNebula add-on catalog can be browsed for other options supported by the community and partners. Moreover, there are other components in the open cloud ecosystem that are not part of the reference architecture, but are nonetheless important to consider when designing a cloud, like for example Configuration Management and Automation Tools for configuring cloud infrastructure and managing large numbers of devices.

You can download a copy from the Jumpstart Packages page at the OpenNebula Systems web site.

Thank you!

Installation of HA OpenNebula on CentOS 7 with Ceph as a datastore and IPoIB as backend network


This article explores the process of installing HA OpenNebula with Ceph as a datastore on three nodes (disks: 6x 240 GB SSD; backend network: IPoIB; OS: CentOS 7), using one additional node for backup.

Scheme of equipment below:

We are using this solution for virtualization of our imagery processing servers.


All actions should be performed on all nodes. On kosmo-arch, perform everything except installing bridge-utils and the FrontEnd network configuration.

yum install bridge-utils

FrontEnd network.

Configure bond0 (mode 0) and run the script below to create the frontend bridge interface for VMs (OpenNebula):

cd /etc/sysconfig/network-scripts
Device=bond0   # interface to attach to the bridge (assumption: the bond0 configured above)
if [ ! -f ifcfg-nab1 ]; then
  cp -p ifcfg-$Device bu-ifcfg-$Device
  echo -e "DEVICE=$Device\nTYPE=Ethernet\nBOOTPROTO=none\nNM_CONTROLLED=no\nONBOOT=yes\nBRIDGE=nab1" > ifcfg-$Device
  grep ^HW bu-ifcfg-$Device >> ifcfg-$Device
  echo -e "DEVICE=nab1\nNM_CONTROLLED=no\nONBOOT=yes\nTYPE=Bridge" > ifcfg-nab1
  egrep -v "^#|^DEV|^HWA|^TYP|^UUI|^NM_|^ONB" bu-ifcfg-$Device >> ifcfg-nab1
fi
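As a sanity check, the same filtering logic can be dry-run in a temporary directory against a made-up saved config (the interface name em1, MAC and addresses are invented for the demo):

```shell
# Dry-run of the bridge-creation filtering in a temp directory.
cd "$(mktemp -d)"
Device=em1
cat > bu-ifcfg-$Device <<'EOF'
DEVICE=em1
TYPE=Ethernet
HWADDR=00:25:90:07:33:68
UUID=11111111-2222-3333-4444-555555555555
NM_CONTROLLED=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.10
NETMASK=255.255.255.0
EOF
# The physical interface keeps only its identity plus BRIDGE=nab1 ...
printf '%s\n' "DEVICE=$Device" TYPE=Ethernet BOOTPROTO=none NM_CONTROLLED=no ONBOOT=yes BRIDGE=nab1 > ifcfg-$Device
grep ^HW bu-ifcfg-$Device >> ifcfg-$Device
# ... while the bridge definition inherits the IP settings.
printf '%s\n' DEVICE=nab1 NM_CONTROLLED=no ONBOOT=yes TYPE=Bridge > ifcfg-nab1
egrep -v "^#|^DEV|^HWA|^TYP|^UUI|^NM_|^ONB" bu-ifcfg-$Device >> ifcfg-nab1
grep ^IPADDR ifcfg-nab1    # the bridge now carries the address
```

The DEVICE/HWADDR/UUID/TYPE/NM_CONTROLLED/ONBOOT lines stay with the physical interface; everything else (BOOTPROTO, IPADDR, NETMASK) moves to the bridge.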

BackEnd network. Configuration of IPoIB:

yum groupinstall -y "Infiniband Support"
yum install opensm

Enable IPoIB and switch InfiniBand to connected mode. This link describes the differences between connected and datagram modes.

 cat /etc/rdma/rdma.conf
 # Load IPoIB
 IPOIB_LOAD=yes
 # Setup connected mode
 SET_IPOIB_CM=yes

Start Infiniband services.

systemctl enable rdma opensm
systemctl start rdma opensm

Check that InfiniBand is working (the device report below is from ibv_devinfo; the link report further down is from iblinkinfo):


hca_id: mlx4_0
      transport:                      InfiniBand (0)
      fw_ver:                         2.7.000
      node_guid:                      0025:90ff:ff07:3368
      sys_image_guid:                 0025:90ff:ff07:336b
      vendor_id:                      0x02c9
      vendor_part_id:                 26428
      hw_ver:                         0xB0
      board_id:                       SM_1071000001000
      phys_port_cnt:                  2
              port:   1
                      state:                  PORT_ACTIVE (4)
                      max_mtu:                4096 (5)
                      active_mtu:             4096 (5)
                      sm_lid:                 8
                      port_lid:               4
                      port_lmc:               0x00
                      link_layer:             InfiniBand
              port:   2
                      state:                  PORT_ACTIVE (4)
                      max_mtu:                4096 (5)
                      active_mtu:             4096 (5)
                      sm_lid:                 4
                      port_lid:               9
                      port_lmc:               0x00
                      link_layer:             InfiniBand


CA: kosmo-virt1 mlx4_0:
    0x002590ffff073385     13    1[  ] ==( 4X          10.0 Gbps Active/  LinkUp)==>       2   10[  ] "Infiniscale-IV Mellanox Technologies" ( )
Switch: 0x0002c90200482d08 Infiniscale-IV Mellanox Technologies:
         2    1[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2    2[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2    1[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2    4[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2    5[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2    6[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2    7[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2    8[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2    9[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2   10[  ] ==( 4X          10.0 Gbps Active/  LinkUp)==>      13    1[  ] "kosmo-virt1 mlx4_0" ( )
         2   11[  ] ==( 4X          10.0 Gbps Active/  LinkUp)==>       4    1[  ] "kosmo-virt2 mlx4_0" ( )
         2   12[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2   13[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2   14[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2   15[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2   16[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2   17[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2   18[  ] ==(                Down/ Polling)==>             [  ] "" ( )
CA: kosmo-virt2 mlx4_0:
    0x002590ffff073369      4    1[  ] ==( 4X          10.0 Gbps Active/  LinkUp)==>       2   11[  ] "Infiniscale-IV Mellanox Technologies" ( )

Set up bond1 (mode 1) over the two IB interfaces. Assign IP 172.19.254.X, where X is the node number. Example below:

 cat /etc/modprobe.d/bonding.conf
 alias bond0 bonding
 alias bond1 bonding

 cat /etc/sysconfig/network-scripts/ifcfg-bond1
 DEVICE=bond1
 ONBOOT=yes
 BOOTPROTO=none
 IPADDR=172.19.254.X  # X = node number
 NETMASK=255.255.255.0
 BONDING_OPTS="mode=1 miimon=500 primary=ib0"

Disable the firewall:

 systemctl stop firewalld
 systemctl disable firewalld

Tune sysctl: add the following to /etc/sysctl.conf and apply with sysctl -p.

net.ipv4.tcp_mem=16777216 16777216 16777216
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 65536 16777216

Installing Ceph.


Configure passwordless access between the nodes for the root user. The key should be created on one node and then copied to the others into /root/.ssh/.

ssh-keygen -t dsa   # create a key with an empty passphrase
cd /root/.ssh
cat id_dsa.pub >> authorized_keys
chown root.root authorized_keys
chmod 600 authorized_keys
echo "StrictHostKeyChecking no" > config

Disable SELinux on all nodes: set SELINUX=disabled in /etc/selinux/config, then apply immediately with

setenforce 0

Add max open files to /etc/security/limits.conf (depends on your requirements) on all nodes

  * hard nofile 1000000
  * soft nofile 1000000

Set up /etc/hosts on all nodes with entries for kosmo-virt1, kosmo-virt2, kosmo-virt3 and kosmo-arch (both the frontend and the IPoIB backend addresses).


Install a kernel >3.15 on all nodes (needed for the CephFS kernel client):

rpm -ivh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-ml -y

Set up new kernel for booting.

grep ^menuentry /boot/grub2/grub.cfg 
grub2-set-default 0 # number of our kernel
grub2-editenv list
grub2-mkconfig -o /boot/grub2/grub.cfg


Set up repository: (on all nodes)

 cat << EOT > /etc/yum.repos.d/ceph.repo
 [ceph]
 name=Ceph packages for \$basearch
 baseurl=http://ceph.com/rpm-giant/el7/\$basearch
 enabled=1
 gpgcheck=1
 gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

 [ceph-noarch]
 name=Ceph noarch packages
 baseurl=http://ceph.com/rpm-giant/el7/noarch
 enabled=1
 gpgcheck=1
 gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
 EOT

Import gpgkey: (on all nodes)

 rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'

Setup ntpd. (on all nodes)

yum install ntp

Edit /etc/ntp.conf, then enable and start ntpd (on all nodes):

systemctl enable ntpd
systemctl start ntpd

Install: (on all nodes)

yum install libunwind -y
yum install -y  ceph-common ceph ceph-fuse ceph-deploy


(on kosmo-virt1) 
cd /etc/ceph
ceph-deploy new kosmo-virt1 kosmo-virt2 kosmo-virt3

MON deploying: (on kosmo-virt1)

ceph-deploy  mon create-initial

OSD deploying:

(on kosmo-virt1)

 cd /etc/ceph
 ceph-deploy gatherkeys kosmo-virt1
 ceph-deploy disk zap kosmo-virt1:sdb
 ceph-deploy osd prepare kosmo-virt1:sdb
 ceph-deploy disk zap kosmo-virt1:sdc
 ceph-deploy osd prepare kosmo-virt1:sdc
 ceph-deploy disk zap kosmo-virt1:sdd
 ceph-deploy osd prepare kosmo-virt1:sdd
 ceph-deploy disk zap kosmo-virt1:sde
 ceph-deploy osd prepare kosmo-virt1:sde
 ceph-deploy disk zap kosmo-virt1:sdf
 ceph-deploy osd prepare kosmo-virt1:sdf
 ceph-deploy disk zap kosmo-virt1:sdg
 ceph-deploy osd prepare kosmo-virt1:sdg

(on kosmo-virt2)

 cd /etc/ceph
 ceph-deploy gatherkeys kosmo-virt2
 ceph-deploy disk zap kosmo-virt2:sdb
 ceph-deploy osd prepare kosmo-virt2:sdb
 ceph-deploy disk zap kosmo-virt2:sdc
 ceph-deploy osd prepare kosmo-virt2:sdc
 ceph-deploy disk zap kosmo-virt2:sdd
 ceph-deploy osd prepare kosmo-virt2:sdd
 ceph-deploy disk zap kosmo-virt2:sde
 ceph-deploy osd prepare kosmo-virt2:sde
 ceph-deploy disk zap kosmo-virt2:sdf
 ceph-deploy osd prepare kosmo-virt2:sdf
 ceph-deploy disk zap kosmo-virt2:sdg
 ceph-deploy osd prepare kosmo-virt2:sdg

(on kosmo-virt3)

 cd /etc/ceph
 ceph-deploy gatherkeys kosmo-virt3
 ceph-deploy disk zap kosmo-virt3:sdb
 ceph-deploy osd prepare kosmo-virt3:sdb
 ceph-deploy disk zap kosmo-virt3:sdc
 ceph-deploy osd prepare kosmo-virt3:sdc
 ceph-deploy disk zap kosmo-virt3:sdd
 ceph-deploy osd prepare kosmo-virt3:sdd
 ceph-deploy disk zap kosmo-virt3:sde
 ceph-deploy osd prepare kosmo-virt3:sde
 ceph-deploy disk zap kosmo-virt3:sdf
 ceph-deploy osd prepare kosmo-virt3:sdf
 ceph-deploy disk zap kosmo-virt3:sdg
 ceph-deploy osd prepare kosmo-virt3:sdg

where sd[b-g] are the SSD disks.

MDS deploying:

The new Giant release of Ceph does not create the default data and metadata pools.
Use ceph osd lspools to check.

 ceph osd pool create data 1024
 ceph osd pool set data min_size 1
 ceph osd pool set data size 2
 ceph osd pool create metadata 1024
 ceph osd pool set metadata min_size 1
 ceph osd pool set metadata size 2

Check the pool ids of data and metadata with:

 ceph osd lspools

Configure FS

 ceph mds newfs 4 3 --yes-i-really-mean-it

where 4 is the id of the metadata pool and 3 is the id of the data pool.
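ceph osd lspools prints the pools as a single comma-separated line; an illustrative way to pull a pool's id from it (sample output hardcoded here, since a live cluster is needed for the real command):

```shell
# Sample `ceph osd lspools` output (hypothetical pool ids):
lspools="0 rbd,3 data,4 metadata,"
data_id=$(echo "$lspools" | tr ',' '\n' | awk '$2 == "data" {print $1}')
meta_id=$(echo "$lspools" | tr ',' '\n' | awk '$2 == "metadata" {print $1}')
# ceph mds newfs expects <metadata-pool-id> <data-pool-id>:
echo "ceph mds newfs $meta_id $data_id --yes-i-really-mean-it"
```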

Configure MDS

(on kosmo-virt1)

 cd /etc/ceph
 ceph-deploy mds create kosmo-virt1

(on kosmo-virt2)

 cd /etc/ceph
 ceph-deploy mds create kosmo-virt2

(on all nodes)

 chkconfig ceph on

Configure kosmo-arch.

Copy /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring from any of the kosmo-virt nodes to kosmo-arch.

Preparing Ceph for OpenNebula.

Create pool:

 ceph osd pool create one 4096
 ceph osd pool set one min_size 1
 ceph osd pool set one size 2

Set up authorization for the pool one:

 ceph auth get-or-create client.oneadmin mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=one' > /etc/ceph/ceph.client.oneadmin.keyring

Get key from keyring:

  cat /etc/ceph/ceph.client.oneadmin.keyring | grep key | awk '{print $3}' >>  /etc/ceph/oneadmin.key
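For illustration, this is roughly what the pipeline extracts; the keyring below is a dummy written to /tmp (the key value is fake):

```shell
# A dummy keyring, for demonstration only:
cat > /tmp/demo.keyring <<'EOF'
[client.oneadmin]
        key = AQDdummyKEYvalue==
EOF
# Same pipeline as above: grab the third field of the "key = ..." line.
key=$(cat /tmp/demo.keyring | grep key | awk '{print $3}')
echo "$key"    # -> AQDdummyKEYvalue==
```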


 ceph auth list

Copy /etc/ceph/ceph.client.oneadmin.keyring and /etc/ceph/oneadmin.key to the other nodes.

Preparing for OpenNebula HA

Configuring MariaDB cluster

Configure MariaDB cluster on all nodes except kosmo-arch

Setup repo:

 cat << EOT > /etc/yum.repos.d/mariadb.repo
 [mariadb]
 name = MariaDB
 baseurl = http://yum.mariadb.org/10.0/centos7-amd64
 gpgkey = https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
 gpgcheck = 1
 EOT


 yum install MariaDB-Galera-server MariaDB-client rsync galera

start service:

 service mysql start
 chkconfig mysql on

prepare for cluster:

 mysql -p
 GRANT USAGE ON *.* to sst_user@'%' IDENTIFIED BY 'PASS';
 GRANT ALL PRIVILEGES on *.* to sst_user@'%';
 service mysql stop

configuring cluster: (for kosmo-virt1)

 cat << EOT > /etc/my.cnf
 collation-server = utf8_general_ci
 init-connect = 'SET NAMES utf8'
 character-set-server = utf8
 wsrep_node_address='' # setup real node ip
 wsrep_node_name='kosmo-virt1' #  setup real node name

(for kosmo-virt2)

 cat << EOT > /etc/my.cnf
 collation-server = utf8_general_ci
 init-connect = 'SET NAMES utf8'
 character-set-server = utf8
 wsrep_node_address='' # setup real node ip
 wsrep_node_name='kosmo-virt2' #  setup real node name

(for kosmo-virt3)

 cat << EOT > /etc/my.cnf
 collation-server = utf8_general_ci
 init-connect = 'SET NAMES utf8'
 character-set-server = utf8
 wsrep_node_address='' # setup real node ip
 wsrep_node_name='kosmo-virt3' #  setup real node name
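The my.cnf fragments above are trimmed and omit the Galera wsrep settings every node needs. A minimal sketch under assumptions (provider path for the CentOS 7 MariaDB-Galera package, rsync SST, made-up cluster name), reusing the sst_user created earlier:

```ini
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="kosmo_cluster"
wsrep_cluster_address="gcomm://kosmo-virt1,kosmo-virt2,kosmo-virt3"
wsrep_sst_method=rsync
wsrep_sst_auth=sst_user:PASS
wsrep_node_address=''        # real node IP
wsrep_node_name='kosmo-virtX' # real node name
```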

(on kosmo-virt1)

 /etc/init.d/mysql start --wsrep-new-cluster

(on kosmo-virt2)

 /etc/init.d/mysql start

(on kosmo-virt3)

 /etc/init.d/mysql start

check on all nodes:

 mysql -p
 show status like 'wsrep%';

Variable_name                  Value

wsrep_local_state_uuid 739895d5-d6de-11e4-87f6-3a3244f26574
wsrep_protocol_version 7
wsrep_last_committed 0
wsrep_replicated 0
wsrep_replicated_bytes 0
wsrep_repl_keys 0
wsrep_repl_keys_bytes 0
wsrep_repl_data_bytes 0
wsrep_repl_other_bytes 0
wsrep_received 6
wsrep_received_bytes 425
wsrep_local_commits 0
wsrep_local_cert_failures 0
wsrep_local_replays 0
wsrep_local_send_queue 0
wsrep_local_send_queue_max 1
wsrep_local_send_queue_min 0
wsrep_local_send_queue_avg 0.000000
wsrep_local_recv_queue 0
wsrep_local_recv_queue_max 1
wsrep_local_recv_queue_min 0
wsrep_local_recv_queue_avg 0.000000
wsrep_local_cached_downto 18446744073709551615
wsrep_flow_control_paused_ns 0
wsrep_flow_control_paused 0.000000
wsrep_flow_control_sent 0
wsrep_flow_control_recv 0
wsrep_cert_deps_distance 0.000000
wsrep_apply_oooe 0.000000
wsrep_apply_oool 0.000000
wsrep_apply_window 0.000000
wsrep_commit_oooe 0.000000
wsrep_commit_oool 0.000000
wsrep_commit_window 0.000000
wsrep_local_state 4
wsrep_local_state_comment Synced
wsrep_cert_index_size 0
wsrep_causal_reads 0
wsrep_cert_interval 0.000000
wsrep_evs_repl_latency 0/0/0/0/0
wsrep_evs_state OPERATIONAL
wsrep_gcomm_uuid 7397d6d6-d6de-11e4-a515-d3302a8c2342
wsrep_cluster_conf_id 2
wsrep_cluster_size 2
wsrep_cluster_state_uuid 739895d5-d6de-11e4-87f6-3a3244f26574
wsrep_cluster_status Primary
wsrep_connected ON
wsrep_local_bf_aborts 0
wsrep_local_index 0
wsrep_provider_name Galera
wsrep_provider_vendor Codership Oy info@codership.com
wsrep_provider_version 25.3.9(r3387)
wsrep_ready ON
wsrep_thread_count 2


Creating user and database:

mysql -p
create database opennebula;
GRANT USAGE ON opennebula.* to oneadmin@'%' IDENTIFIED BY 'PASS';
GRANT ALL PRIVILEGES on opennebula.* to oneadmin@'%';

Remember: if all nodes go down, the most up-to-date node must be started with /etc/init.d/mysql start --wsrep-new-cluster. You have to identify that node yourself; if you bootstrap the cluster from a stale node, the other nodes will report an error in their logs, e.g.: [ERROR] WSREP: gcs/src/gcs_group.cpp:void group_post_state_exchange(gcs_group_t*)():319: Reversing history: 0 -> 0, this member has applied 140536161751824 more events than the primary component. Data loss is possible. Aborting.
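One way to identify the most advanced node (an addition, not from the original article) is to compare the seqno recorded in /var/lib/mysql/grastate.dat on each node and bootstrap the one with the highest value. Sketch against a sample file with hypothetical values:

```shell
# A node that shut down cleanly records its last applied transaction in
# seqno; -1 means the state is unknown (e.g. after a crash).
cat > /tmp/grastate.dat <<'EOF'
# GALERA saved state
version: 2.1
uuid:    739895d5-d6de-11e4-87f6-3a3244f26574
seqno:   1523
EOF
seqno=$(awk '/^seqno:/ {print $2}' /tmp/grastate.dat)
echo "$seqno"    # -> 1523
```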

Configuring HA cluster

Unfortunately the pcs cluster packages conflict with the OpenNebula server. That's why we will go with pacemaker, corosync and crmsh.

Installing HA

Set up repo on all nodes except kosmo-arch:

 cat << EOT > /etc/yum.repos.d/network\:ha-clustering\:Stable.repo
 [network_ha-clustering_Stable]
 name=Stable High Availability/Clustering packages (CentOS_CentOS-7)
 baseurl=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/
 enabled=1
 gpgcheck=1
 gpgkey=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/repodata/repomd.xml.key
 EOT

Install on all nodes except kosmo-arch:

 yum install corosync pacemaker crmsh resource-agents -y

On kosmo-virt1 create configuration

 vi /etc/corosync/corosync.conf
 totem {
     version: 2
     secauth: off
     cluster_name: cluster
     transport: udpu
 }

 nodelist {
     node {
         ring0_addr: kosmo-virt1
         nodeid: 1
     }
     node {
         ring0_addr: kosmo-virt2
         nodeid: 2
     }
     node {
         ring0_addr: kosmo-virt3
         nodeid: 3
     }
 }

 quorum {
     provider: corosync_votequorum
 }

 logging {
     to_syslog: yes
 }

and create the authkey on kosmo-virt1:

 cd /etc/corosync
 corosync-keygen

Copy corosync.conf and authkey to kosmo-virt2 and kosmo-virt3.

Enabling (on all nodes except kosmo-arch):

 systemctl enable pacemaker corosync

Starting (on all nodes except kosmo-arch):

 systemctl start pacemaker corosync


 crm status
 Last updated: Mon Mar 30 18:33:14 2015
 Last change: Mon Mar 30 18:23:47 2015 via crmd on kosmo-virt2
 Stack: corosync
 Current DC: kosmo-virt2 (2) - partition with quorum
 Version: 1.1.10-32.el7_0.1-368c726
 3 Nodes configured
 0 Resources configured
 Online: [ kosmo-virt1 kosmo-virt2 kosmo-virt3]

Add properties:

crm configure property stonith-enabled=false
crm configure property no-quorum-policy=stop

Installing OpenNebula


Setup repo on all nodes except kosmo-arch:

 cat << EOT > /etc/yum.repos.d/opennebula.repo
 [opennebula]
 name=opennebula
 baseurl=http://downloads.opennebula.org/repo/4.12/CentOS/7/x86_64/
 enabled=1
 gpgcheck=0
 EOT

Installing (on all nodes except kosmo-arch):

 yum install -y opennebula-server opennebula-sunstone opennebula-node-kvm qemu-img qemu-kvm

Ruby runtime installation (on all nodes except kosmo-arch):

 /usr/share/one/install_gems

Change the oneadmin password:

 passwd oneadmin

Create passwordless SSH access for oneadmin (on kosmo-virt1):

 su oneadmin
 cd ~/.ssh
 ssh-keygen -t dsa
 cat id_dsa.pub >> authorized_keys
 chown oneadmin:oneadmin authorized_keys
 chmod 600 authorized_keys
 echo "StrictHostKeyChecking no" > config

Copy to other nodes (remember that oneadmin home directory is /var/lib/one).

Change listen for sunstone-server (on all nodes):

 sed -i 's/host:\ 127\.0\.0\.1/host:\ 0\.0\.0\.0/g' /etc/one/sunstone-server.conf
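Before running the sed one-liner on every node, the expression can be sanity-checked on a throwaway file (path and sample line are made up for the demo):

```shell
# One representative line from sunstone-server.conf:
echo ':host: 127.0.0.1' > /tmp/sunstone-test.conf
sed -i 's/host:\ 127\.0\.0\.1/host:\ 0\.0\.0\.0/g' /tmp/sunstone-test.conf
cat /tmp/sunstone-test.conf    # -> :host: 0.0.0.0
```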

On kosmo-virt1:

copy all /var/lib/one/.one/*.auth files and the one.key file to OTHER_NODES:/var/lib/one/.one/

Start the services on kosmo-virt1:

 systemctl start opennebula opennebula-sunstone

Try to connect to http://node:9869.
Check logs for errors (/var/log/one/oned.log /var/log/one/sched.log /var/log/one/sunstone.log).
If no errors:

 systemctl stop opennebula opennebula-sunstone

Add Ceph support to qemu-kvm on all nodes except kosmo-arch. First check whether rbd is already supported:

 qemu-img -h | grep rbd
 /usr/libexec/qemu-kvm --drive format=? | grep rbd

If there is no rbd support, then you have to compile and install it yourself:



 yum groupinstall -y "Development Tools"
 yum install -y yum-utils rpm-build
 yumdownloader --source qemu-kvm
 rpm -ivh qemu-kvm-1.5.3-60.el7_0.11.src.rpm


 cd ~/rpmbuild/SPECS
 vi qemu-kvm.spec

Change %define rhev 0 to %define rhev 1.

 rpmbuild -ba qemu-kvm.spec

Install the rebuilt packages (on all nodes except kosmo-arch):

 rpm -e --nodeps libcacard-1.5.3-60.el7_0.11.x86_64
 rpm -e --nodeps qemu-img-1.5.3-60.el7_0.11.x86_64
 rpm -e --nodeps qemu-kvm-common-1.5.3-60.el7_0.11.x86_64
 rpm -e --nodeps qemu-kvm-1.5.3-60.el7_0.11.x86_64
 rpm -ivh libcacard-rhev-1.5.3-60.el7.centos.11.x86_64.rpm
 rpm -ivh qemu-img-rhev-1.5.3-60.el7.centos.11.x86_64.rpm
 rpm -ivh qemu-kvm-common-rhev-1.5.3-60.el7.centos.11.x86_64.rpm
 rpm -ivh qemu-kvm-rhev-1.5.3-60.el7.centos.11.x86_64.rpm

Check for Ceph support:

 qemu-img -h | grep rbd
 Supported formats: vvfat vpc vmdk vhdx vdi sheepdog sheepdog sheepdog rbd raw host_cdrom host_floppy host_device file qed qcow2 qcow parallels nbd nbd nbd iscsi gluster gluster gluster gluster dmg cow cloop bochs blkverify    blkdebug
 /usr/libexec/qemu-kvm --drive format=? | grep rbd
 Supported formats: vvfat vpc vmdk vhdx vdi sheepdog sheepdog sheepdog rbd raw host_cdrom host_floppy host_device file qed qcow2 qcow parallels nbd nbd nbd iscsi gluster gluster gluster gluster dmg cow cloop bochs blkverify blkdebug

Try to write an image (on all nodes except kosmo-arch):

 qemu-img create -f rbd rbd:one/test-virtN 10G

where N is the node number.

Add ceph support for libvirt

On all nodes:

 systemctl enable messagebus.service
 systemctl start messagebus.service
 systemctl enable libvirtd.service
 systemctl start libvirtd.service

On kosmo-virt1 create a UUID:

 uuidgen

Create secret.xml

 cat > secret.xml <<EOF
 <secret ephemeral='no' private='no'>
 <uuid>cfb34c4b-d95c-4abc-a4cc-f8a2ae532cb5</uuid>
 <usage type='ceph'>
 <name>client.oneadmin AQDp1aqz+JPAJhAAIcKf/Of0JfpJRQvfPLqn9Q==</name>
 </usage>
 </secret>
 EOF

where AQDp1aqz+JPAJhAAIcKf/Of0JfpJRQvfPLqn9Q== is the contents of /etc/ceph/oneadmin.key, and the uuid is the one generated above.
Copy secret.xml to other nodes.

Add key to libvirt (for all nodes except kosmo-arch)

 virsh secret-define --file secret.xml
 virsh secret-set-value --secret cfb34c4b-d95c-4abc-a4cc-f8a2ae532cb5 --base64 $(cat /etc/ceph/oneadmin.key)


 virsh secret-list
 UUID                                 Usage
 cfb34c4b-d95c-4abc-a4cc-f8a2ae532cb5 ceph client.oneadmin AQDp1aqz+JPAJhAAIcKf/Of0JfpJRQvfPLqn9Q==

Restart libvirtd:

 systemctl restart libvirtd.service

Converting the database to MySQL:

Download the script:

 wget http://www.redmine.org/attachments/download/6239/sqlite3-to-mysql.py


 sqlite3 /var/lib/one/one.db .dump | ./sqlite3-to-mysql.py > mysql.sql   
 mysql -u oneadmin -p opennebula < mysql.sql

Change /etc/one/oned.conf from

 DB = [ backend = "sqlite" ]

to

 DB = [ backend = "mysql",
      server  = "localhost",
      port    = 0,
      user    = "oneadmin",
      passwd  = "PASS",
      db_name = "opennebula" ]

Copy oned.conf as root to the other nodes, except kosmo-arch.

Check the kosmo-virt2 and kosmo-virt3 nodes in turn:

   systemctl start opennebula opennebula-sunstone

check the logs for errors (/var/log/one/oned.log, /var/log/one/sched.log, /var/log/one/sunstone.log), then stop the services again:

   systemctl stop opennebula opennebula-sunstone

Creating HA resources

On all nodes except kosmo-arch:

 systemctl disable opennebula opennebula-sunstone opennebula-novnc

From any of the nodes except kosmo-arch, add the resources via crm configure:

 primitive ClusterIP ocf:heartbeat:IPaddr2 params ip="" cidr_netmask="24" op monitor interval="30s"
 primitive opennebula_p systemd:opennebula \
 op monitor interval=60s timeout=20s \
 op start interval="0" timeout="120s" \
 op stop  interval="0" timeout="120s" 
 primitive opennebula-sunstone_p systemd:opennebula-sunstone \
 op monitor interval=60s timeout=20s \
 op start interval="0" timeout="120s" \
 op stop  interval="0" timeout="120s" 
 primitive opennebula-novnc_p systemd:opennebula-novnc \
 op monitor interval=60s timeout=20s \
 op start interval="0" timeout="120s" \
 op stop  interval="0" timeout="120s" 
 group Opennebula_HA ClusterIP opennebula_p opennebula-sunstone_p  opennebula-novnc_p


 crm status
 Last updated: Tue Mar 31 16:43:00 2015
 Last change: Tue Mar 31 16:40:22 2015 via cibadmin on kosmo-virt1
 Stack: corosync
 Current DC: kosmo-virt2 (2) - partition with quorum
 Version: 1.1.10-32.el7_0.1-368c726
 3 Nodes configured
 4 Resources configured
 Online: [ kosmo-virt1 kosmo-virt2 kosmo-virt3 ]
 Resource Group: Opennebula_HA
   ClusterIP  (ocf::heartbeat:IPaddr2):       Started kosmo-virt1
   opennebula_p       (systemd:opennebula):   Started kosmo-virt1
   opennebula-sunstone_p      (systemd:opennebula-sunstone):  Started kosmo-virt1
   opennebula-novnc_p (systemd:opennebula-novnc):     Started kosmo-virt1

Configuring OpenNebula

http://active_node:9869 – web management.

Using the web interface: 1. Create a cluster. 2. Add hosts (and networks).

Console management.

3. Add a network (as oneadmin):

 cat << EOT > def.net
 NAME    = "Shared LAN"
 # Now we'll use the host private network (physical)
 BRIDGE  = nab1
 EOT
 onevnet create def.net

4. Create the RBD image datastore (as oneadmin):

 cat << EOT > rbd.conf
 NAME = "cephds"
 DS_MAD = ceph
 TM_MAD = ceph
 POOL_NAME = one
 CEPH_SECRET = "cfb34c4b-d95c-4abc-a4cc-f8a2ae532cb5" # uuid key used in the libvirt authentication for Ceph
 CEPH_USER = oneadmin
 EOT
 onedatastore create rbd.conf

5. Create the system Ceph datastore.

Check the last datastore id number, N:

 onedatastore list

On all nodes, create the directory and mount CephFS:

mkdir /var/lib/one/datastores/N+1
echo "172.19.254.K:6789:/ /var/lib/one/datastores/N+1 ceph rw,relatime,name=admin,secret=AQB4jxJV8PuhJhAAdsdsdRBkSFrtr0VvnQNljBw==,nodcache 0 0 # see secret in /etc/ceph/ceph.client.admin.keyring" >> /etc/fstab
mount /var/lib/one/datastores/N+1

where K is the IP of the current node.
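Since each node's IPoIB address ends in its node number, K can be derived from the hostname. A small sketch (hostname hardcoded for the demo; on a real node use hostname -s):

```shell
host=kosmo-virt2            # normally: host=$(hostname -s)
num=${host##*[!0-9]}        # strip everything up to the last non-digit
echo "172.19.254.$num"      # -> 172.19.254.2
```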

From one node, change the permissions:

chown oneadmin:oneadmin /var/lib/one/datastores/N+1

Create system ceph datastore (su oneadmin):

 cat << EOT > sys_fs.conf
 NAME    = system_ceph
 TM_MAD  = shared
 EOT

 onedatastore create sys_fs.conf

6. Add the nodes, vnets and datastores to the created cluster with the web interface.


Here is the official documentation. One comment: I'm using the migrate option (-m) instead of the recreate command in the host error hook:

 HOST_HOOK = [
   name      = "error",
   on        = "ERROR",
   command   = "host_error.rb",
   arguments = "$HID -m",
   remote    = no ]


Some words about backup.

Use the persistent image type for this backup scheme.

For backup, a single Linux server, kosmo-arch (a Ceph client), with ZFS on Linux installed was used. Deduplication is set on for the zpool. (Remember that deduplication requires about 2 GB of RAM per 1 TB of storage space.)

An example of a simple script started by cron:

#!/bin/bash
currdate=`/bin/date +%Y-%m-%0e`
olddate=`/bin/date --date="60 days ago" +%Y-%m-%0e`
imagelist="one-21" # space-delimited list of images
for i in $imagelist; do
  snapcurchk=`/usr/bin/rbd -p one ls | grep $i | grep $currdate`
  snapoldchk=`/usr/bin/rbd -p one ls | grep $i | grep $olddate`
  if test -z "$snapcurchk"; then
    /usr/bin/rbd snap create --snap $currdate one/$i
    /usr/bin/rbd export one/$i@$currdate /rbdback/$i-$currdate
  else
    echo "current snapshot exists"
  fi
  if test -z "$snapoldchk"; then
    echo "old snapshot doesn't exist"
  else
    /usr/bin/rbd snap rm one/$i@$olddate
    /bin/rm -f /rbdback/$i-$olddate
  fi
done
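The snapshot names the script relies on are plain dates produced by GNU date; a quick check of the format and the 60-day offset with a fixed, hypothetical reference day:

```shell
# %0e is the zero-padded day of month (GNU date).
date --date="2015-05-01" +%Y-%m-%0e             # -> 2015-05-01
date --date="2015-05-01 -60 days" +%Y-%m-%0e    # -> 2015-03-02
```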

Use the onevm utility or the web interface (see the template) to find out which image is assigned to a VM.

onevm list
onevm show "VM_ID" -a | grep IMAGE_ID


Don't forget to change the VM's storage driver to vda (virtio; drivers for Windows exist too). Without that you will face low IO performance (no more than 100 MB/s); I saw 415 MB/s with the virtio drivers.


Maintenance Release – OpenNebula Cotton Candy 4.12.1

The OpenNebula team is proud to announce a new maintenance release of OpenNebula, 4.12.1 Cotton Candy. This release comes with several bug fixes found after the 4.12 release. These bug fixes cover different OpenNebula components, like for instance the scheduler, the Cloud View self-service portal, the Sunstone web interface, the OpenNebula core and several drivers (VM, Auth, Network). Check the full list of bug fixes in the development portal.

Besides the bug fixes mentioned above, 4.12.1 includes several improvements:

If you haven’t had the chance so far to try OpenNebula 4.12, now is the time to download and install OpenNebula 4.12.1 Cotton Candy. As a highlight, see below the new showback feature, which enables the generation of cost reports that can be integrated with chargeback and billing platforms:

OpenNebula Conf Call for Speakers Deadline Extended, April 15th

Due to the number of requests to extend the Call for Speakers, we have moved the deadline to April 15th. Speakers will receive free admission, which includes:

  • Attendance at all conference presentations
  • Attendance at pre-conference tutorials and hacking sessions
  • Coffee during the morning and afternoon breaks
  • Lunch on both conference days
  • Dinner event on the first conference day
  • Tapas dinner on the pre-conference day
  • WiFi access
  • … and the opportunity to address a large audience of talented and influential cloud and open-source experts!

The third OpenNebula International Conference will be held in Barcelona from the 20th to the 22nd of October 2015. As you may already know, previous editions were a total success, with useful OpenNebula experiences masterfully portrayed by people from Akamai, Produban (Santander Group), BBC, FermiLab, ESA, Deloitte, CentOS, and many others.

Should you be interested, we would like to ask you to fill in the Session Proposal Form before April 15th.

See you in Barcelona!


OpenNebula Newsletter – March 2015

This newsletter is intended for OpenNebula users, developers and members of the community, and compiles the highlights of the OpenNebula project during the last month as well as the actions planned for the upcoming months.


The OpenNebula team released this month the latest stable release, 4.12 Cotton Candy. This is a stable release and so a recommended update for all production deployments. Cotton Candy comes with several improvements in different subsystems and components. OpenNebula is now able to generate cost reports that can be integrated with chargeback and billing platforms, and also presented to both the administrators and the end users.

Moreover, Virtual Datacenters have been redefined as a new kind of OpenNebula resource. Making VDCs a separate resource has several advantages; for instance, they can have one or more Groups added to them. This gives the Cloud Admin greater resource assignment flexibility.

Other perks of upgrading your installation to 4.12 include SPICE support, the excellent addition of Security Groups (allowing administrators to define firewall rules and apply them to the Virtual Machines), support for VXLAN, huge improvements in vCenter (import running VMs, network management, new vCenter cloud view, VM contextualization support, etc.), system datastore flushing, and many more minor features and important bug fixes. As usual, the migration path has been thoroughly designed and tested, so updating to Cotton Candy from previous versions is a breeze. No excuses then for not bringing your OpenNebula to the latest state of the art in cloud management platforms!
Also this month, a new release of vOneCloud, 1.2.1, the open replacement for VMware vCloud, was made available to the general public, meaning that all users without an active support subscription are able to upgrade through the Control Panel with a single click. If you are using vOneCloud 1.2, take this chance to get an improved version, including VLAN support through Sunstone, notifications of new releases, better log display in the Control Panel and more. And if you are still not using vOneCloud, give it a try! We’ve packed 1.2.1 in an OVA for your convenience, just to keep you without excuses again :)


We certainly love our community, and it seems you love us back! We are very proud to count Runtastic among our users, and when they explain why they chose us, we feel elated. They started using OpenNebula for its virtualization management features, and are continuously evolving towards more cloudy features. Way to go!

“Our newly-launched machines are automatically included into Chef, and start doing their work within a minute.”

It is good to know that OpenNebula clouds are expanding; it means they are healthy clouds. Like this one by bpsNode, expanding to Miami and Dallas. We are also excited by awesome user stories like this one, featuring Altus IT and Lenovo delivering IT infrastructures in Croatia using OpenNebula. Way to go!

How good is your Russian? If you are fluent, enjoy the reasons why Yuterra chose OpenNebula and Ceph for its private cloud. Fluent in German as well? Then check out this OpenNebula webinar.

Spreading the word is also something we deeply value in our community, so we want to welcome the newly born Barcelona User Group! If you are in Barcelona, check it out; you won’t be disappointed. We also love examples like this talk at FOSSAsia, which is quite funny too, so do not miss it.

We also love this kind of feedback on how OpenNebula plays nicely with other components in the ecosystem. Keeping our marketplace healthy and up to date is also thanks to the community, as with this addition of ArchLinux to the catalog. Thanks!

A big thanks as well to all the members of the community who make a multi-language Sunstone possible. This really fosters adoption, and we could never have done it without you! Last, but not least, it is very gratifying to see how OpenNebula helps build robust products like this one.


After the second edition of the OpenNebula Conference, we are already preparing the upcoming third edition in Barcelona, in October 2015. Interested? You are still in time to get a good deal on tickets. If you want to share your OpenNebula experiences, the call for papers is also open until the end of this month, so the clock is ticking: do not miss the chance! Your company may also be interested in the sponsorship opportunities for OpenNebulaConf 2015.

This last month, the OpenNebula project proudly sponsored a corner of the Open Cloud & Developer Park at Cloud Expo Europe. Over two intense days, members of the team gave several talks about the OpenNebula philosophy, design and features in the Park’s theatre. Also on board in the OpenNebula corner, our partners from CloudWeavers showed how OpenNebula does everything it does with a minimal footprint. The team from viApps did not miss the opportunity to take a pod in the corner either, telling attendees about their integration and added value. OpenNebula Systems, the company behind OpenNebula, was also present at its own pod, presenting vOneCloud, the product that turns your vCenter infrastructure into a private cloud. And Runtastic introduced us to the reasons why they chose OpenNebula over other cloud management platforms to build a cloud serving 50 million users. Impressive!


The TechDay in Prague was a total success: a full house, highly engaged attendees, and lots of juicy feedback. We plan to follow up in other cities, including Chicago, Dallas and Dublin. Send us an email, or write to the community discuss mailing list, if you are interested in hosting a TechDay event.


As you may know, OpenNebula is participating in BEACON, a flagship European project in federated cloud networking. As part of this effort, members of the team traveled to Brussels for NetFutures15 to find synergies with other research projects.

In the coming months, members of the OpenNebula team will be speaking at the following events:

If you are interested in receiving OpenNebula training, check the schedule of 2015 public classes at OpenNebula Headquarters. Please contact us if you would like to request training near you.

Remember that you can find slides and resources from past events on our Events page. We have also created a SlideShare account where you can view the slides from some of our recent presentations.

Barcelona OpenNebula User Group


As you know, the community is an important pillar of the OpenNebula project. Through the mailing lists and forums, community members can ask questions, make requests, or contribute new ideas to the developers. This input is very valuable: it helps other users and drives the development of new features.

However, the OpenNebula project also believes in User Groups. OpenNebula User Groups are local communities where users can discuss and share information and experiences in a more direct way, around ‘town’, reaching a closer audience and bringing together people who want to collaborate with the project.

Also, remember that this year (2015) the annual OpenNebula conference travels from Berlin to Barcelona, the ‘smart city’ that will be the meeting point where developers, users, administrators, researchers and others can share experiences, case studies, and more.


For these reasons, some cloud admins in the Barcelona area have decided to create the Barcelona OpenNebula User Group. This group aims to be a small-scale community where we can discuss and pursue common objectives that support the project. We have created a website and a Google group where we will announce our first steps and work together towards common goals.

In addition, as part of the ONEBCN user group’s official presentation tour, on the 5th of May we will be at Sudoers, a sysadmin group that meets regularly at the North Campus of the UPC.

It is a totally open group, so you are welcome to join! The first members of the group are:

Oriol Martí, Gabriel Verdejo, Angel Galindo Muñoz, Xavier Peralta Ramos, Jordi Guijarro, Juan José Fuentes, Miguel Ángel Flores, Alex Vaqué

Some interesting links:

Cloudadmins Community Blog – http://www.cloudadmins.org

OneBCN Google Group – https://groups.google.com/forum/embed/?place=forum%2Fopennebula-barcelona-usergroup

Sudoers Barcelona – http://sudoers-barcelona.wikia.com/wiki/Sudoers_Barcelona_Wiki

vOneCloud 1.2.1 is Out!

A new version of vOneCloud, 1.2.1, has been released. This is an update to the previous stable version, 1.2, and it is an open release to the general public, meaning that you don’t need an active support subscription to access this upgrade.

This update is therefore available from the Control Panel with a single click. The Control Panel component will, behind the scenes:

  • Download the new vOneCloud packages
  • Install the new vOneCloud packages, keeping the existing configuration
  • Restart the OpenNebula service, with no downtime whatsoever to the currently running virtual machines

After the upgrade is performed, vOneCloud services will be up and running, updated to the latest version, which includes the following improvements:

  • Display logs in the Control Panel
  • Sunstone notifies the administrator user when there is a new release
  • Information about newly available releases in the Control Panel
  • Better VLAN tagged Network handling in Sunstone

If you don’t currently have a running instance of vOneCloud, you can download an OVA with 1.2.1 already installed; you only need to register in the vOneCloud support portal and visit this article.

Relevant Links

OpenNebula Conf 2015: Call for Speakers Reminder

As you may already know, this year’s OpenNebulaConf is taking place in Barcelona, Spain, on October 20-22. If you want to participate in this event and have not submitted your talk yet, you have until March 31.

If you want to get an idea of past OpenNebulaConf sessions, including talks from companies such as CentOS, Runtastic, Puppet Labs, CloudWeavers, Red Hat and Deutsche Post, please check our YouTube channel or download the presentations from our SlideShare account.

Also, we would like to remind you that tickets are already available; if you buy yours before June 15th, you get the best discount of the year.

If you are interested in sponsoring this event, check out our sponsorship opportunities.

Hope to see you there!


Why Did We Choose OpenNebula for Runtastic?

Link to the original article at TheStack.com


Armin Deliomini is a Linux, virtualisation and database engineer at the Austrian mobile fitness company Runtastic, and has made a rewarding journey away from commercial cloud solutions, such as VMware and Oracle, in favour of completely open-source alternatives. Over the last two years Armin has implemented a private infrastructure for the Runtastic ecosystem and its 50 million users. Armin is speaking today at Cloud Expo Europe, which takes place alongside Data Centre World this week, on the 11th and 12th of March.

Since 2009 Runtastic has created apps, products and services for health and fitness tracking and management, backed by a powerful infrastructure of around 300 virtual machines on thirty OpenNebula nodes, ensuring that 100 million downloaded apps and around 50 million registered users can access our services at any time.

We didn’t have a lot of time to decide on a technology to run our virtual environment. We had the classic vSphere environment in mind, but building an environment completely around open-source software spoke against a commercial virtualization solution. We tested oVirt, Proxmox and OpenStack; the latter came very close, since we use Ubuntu across our infrastructure and it was the most-hyped open-source cloud solution at the time. A meeting with Tino Vázquez at the Netways booth at CeBIT 2013 convinced us that OpenNebula was at least worth a thought. We set up a test installation; four months later our first production-grade OpenNebula cluster was fired up.

So why did we choose OpenNebula? Firstly, we liked the flexibility; in our business we don’t know exactly where the road ahead is leading. We had to find a technology that would grow with our needs and that we could adapt easily. We came at that time from a classic virtualization background, with VMware vSphere, and that was how we started with OpenNebula: classic virtual machines running on a hypervisor cluster managed by a central piece of software. But we also knew that this was not the future. OpenNebula gave us the comfort of starting in a well-known way, while at the same time leaving us room to evolve.

Our first set-up consisted of 16 KVM hosts and a NetApp storage system serving NFS. In the beginning OpenNebula did little more than give us an interface to start and stop machines and change their resource settings, but over time the situation developed very favourably. Our newly-launched machines are automatically included into Chef, and start doing their work within a minute. We also have the possibility to start machines in external clouds in the event of resource shortages.

Our current projects are a new Cisco UCS Blade infrastructure operating as OpenNebula nodes, to lift our compute power to around 1,000 cores, and Ceph as a future storage backend, a successor to our two NFS storage systems. We recently set up our first Ceph cluster in our preproduction environment.

We are no experts on OpenNebula, but then, we don’t have to be. It simply works…