Installation of HA OpenNebula on CentOS 7 with Ceph as a datastore and IPoIB as the backend network

Introduction.

This article walks through the process of installing HA OpenNebula with Ceph as the datastore on three nodes (disks: 6x 240 GB SSD, backend network: IPoIB, OS: CentOS 7), with one additional node used for backup.

The equipment scheme is shown in the diagram below.

We use this solution to virtualize our imagery processing servers.

Preparing.

All actions in this section should be performed on all nodes. On kosmo-arch, perform everything except the bridge-utils installation and the FrontEnd network setup.

yum install bridge-utils

FrontEnd network.

Configure bond0 (mode 0) and run the script below to create the frontend bridge interface for the OpenNebula VMs:

#!/bin/bash
# Turn bond0 into a port of the new bridge nab1 (frontend network for VMs)
Device=bond0
cd /etc/sysconfig/network-scripts
if [ ! -f ifcfg-nab1 ]; then
    # back up the original bond0 config, then rewrite it to attach bond0 to the bridge
    cp -p ifcfg-$Device bu-ifcfg-$Device
    echo -e "DEVICE=$Device\nTYPE=Ethernet\nBOOTPROTO=none\nNM_CONTROLLED=no\nONBOOT=yes\nBRIDGE=nab1" > ifcfg-$Device
    grep ^HW bu-ifcfg-$Device >> ifcfg-$Device
    # create the bridge itself and move the IP settings from bond0 onto it
    echo -e "DEVICE=nab1\nNM_CONTROLLED=no\nONBOOT=yes\nTYPE=Bridge" > ifcfg-nab1
    egrep -v "^#|^DEV|^HWA|^TYP|^UUI|^NM_|^ONB" bu-ifcfg-$Device >> ifcfg-nab1
fi
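
The physical NICs enslaved to bond0 are not shown above. A minimal sketch of one slave file, assuming the interfaces are named eno1 and eno2 (adjust to your hardware):

 cat /etc/sysconfig/network-scripts/ifcfg-eno1   # repeat for eno2
 DEVICE=eno1
 TYPE=Ethernet
 MASTER=bond0
 SLAVE=yes
 BOOTPROTO=none
 NM_CONTROLLED=no
 ONBOOT=yes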

BackEnd network. Configuration of IPoIB:

yum groupinstall -y "Infiniband Support"
yum install opensm

Enable IPoIB and switch InfiniBand to connected mode (see this link for the differences between connected and datagram modes).

 cat /etc/rdma/rdma.conf
# Load IPoIB
IPOIB_LOAD=yes
# Setup connected mode
SET_IPOIB_CM=yes

Start the InfiniBand services.

systemctl enable rdma opensm
systemctl start rdma opensm

Check that it is working:

ibv_devinfo

hca_id: mlx4_0
      transport:                      InfiniBand (0)
      fw_ver:                         2.7.000
      node_guid:                      0025:90ff:ff07:3368
      sys_image_guid:                 0025:90ff:ff07:336b
      vendor_id:                      0x02c9
      vendor_part_id:                 26428
      hw_ver:                         0xB0
      board_id:                       SM_1071000001000
      phys_port_cnt:                  2
              port:   1
                      state:                  PORT_ACTIVE (4)
                      max_mtu:                4096 (5)
                      active_mtu:             4096 (5)
                      sm_lid:                 8
                      port_lid:               4
                      port_lmc:               0x00
                      link_layer:             InfiniBand
              port:   2
                      state:                  PORT_ACTIVE (4)
                      max_mtu:                4096 (5)
                      active_mtu:             4096 (5)
                      sm_lid:                 4
                      port_lid:               9
                      port_lmc:               0x00
                      link_layer:             InfiniBand

and

iblinkinfo
CA: kosmo-virt1 mlx4_0:
    0x002590ffff073385     13    1[  ] ==( 4X          10.0 Gbps Active/  LinkUp)==>       2   10[  ] "Infiniscale-IV Mellanox Technologies" ( )
Switch: 0x0002c90200482d08 Infiniscale-IV Mellanox Technologies:
         2    1[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2    2[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2    1[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2    4[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2    5[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2    6[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2    7[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2    8[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2    9[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2   10[  ] ==( 4X          10.0 Gbps Active/  LinkUp)==>      13    1[  ] "kosmo-virt1 mlx4_0" ( )
         2   11[  ] ==( 4X          10.0 Gbps Active/  LinkUp)==>       4    1[  ] "kosmo-virt2 mlx4_0" ( )
         2   12[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2   13[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2   14[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2   15[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2   16[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2   17[  ] ==(                Down/ Polling)==>             [  ] "" ( )
         2   18[  ] ==(                Down/ Polling)==>             [  ] "" ( )
CA: kosmo-virt2 mlx4_0:
    0x002590ffff073369      4    1[  ] ==( 4X          10.0 Gbps Active/  LinkUp)==>       2   11[  ] "Infiniscale-IV Mellanox Technologies" ( )

Set up bond1 (mode 1) over the two IB interfaces and assign the IP 172.19.254.X, where X is the node number. Example below:

 cat /etc/modprobe.d/bonding.conf
 alias bond0 bonding
 alias bond1 bonding
 cat /etc/sysconfig/network-scripts/ifcfg-bond1
 DEVICE=bond1
 TYPE=bonding
 BOOTPROTO=static
 USERCTL=no
 ONBOOT=yes
 IPADDR=172.19.254.x
 NETMASK=255.255.255.0
 BONDING_OPTS="mode=1 miimon=500 primary=ib0"
 MTU=65520
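
The IB slave interface files are not shown in the original. A minimal sketch for ib0 (repeat for ib1), assuming the default ib0/ib1 device names:

 cat /etc/sysconfig/network-scripts/ifcfg-ib0
 DEVICE=ib0
 TYPE=InfiniBand
 MASTER=bond1
 SLAVE=yes
 BOOTPROTO=none
 NM_CONTROLLED=no
 ONBOOT=yes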

Disable the firewall.
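
A sketch of disabling the firewall on CentOS 7, assuming firewalld is the active firewall (if you need filtering, open the Ceph, Galera, corosync and OpenNebula ports instead):

 systemctl stop firewalld
 systemctl disable firewalld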

Tuning sysctl.

net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.core.rmem_default=16777216
net.core.wmem_default=16777216
net.core.optmem_max=16777216
net.ipv4.tcp_mem=16777216 16777216 16777216
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 65536 16777216
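
To make these settings persistent and apply them without a reboot, one option is to drop them into /etc/sysctl.d/ (the file name below is arbitrary):

 # put the lines above into /etc/sysctl.d/90-net-tuning.conf, then reload
 sysctl --system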

Installing Ceph.

Preparation

Configure passwordless SSH access between the nodes for the root user. The key should be created on one node and then copied to the others into /root/.ssh/.

ssh-keygen -t dsa (creation of passwordless key)
cd /root/.ssh
cat id_dsa.pub >> authorized_keys
chown root.root authorized_keys
chmod 600 authorized_keys
echo "StrictHostKeyChecking no" > config
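
Copying the prepared key material from the first node to the others might look like this (a sketch; adjust host names or use IP addresses if /etc/hosts is not set up yet):

 for h in kosmo-virt2 kosmo-virt3 kosmo-arch; do
   scp -r /root/.ssh $h:/root/
 done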

Disable SELinux on all nodes

In /etc/selinux/config
SELINUX=disabled

setenforce 0

Increase the maximum number of open files in /etc/security/limits.conf (adjust to your requirements) on all nodes:

  * hard nofile 1000000
  * soft nofile 1000000

Setup /etc/hosts on all nodes:

172.19.254.1 kosmo-virt1
172.19.254.2 kosmo-virt2
172.19.254.3 kosmo-virt3  
172.19.254.150 kosmo-arch
192.168.14.42 kosmo-virt1
192.168.14.43 kosmo-virt2
192.168.14.44 kosmo-virt3  
192.168.14.150 kosmo-arch

Installing

Install a kernel newer than 3.15 on all nodes (needed for the CephFS kernel client):

rpm -ivh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-ml -y

Set the new kernel as the default boot entry.

grep ^menuentry /boot/grub2/grub.cfg 
grub2-set-default 0 # number of our kernel
grub2-editenv list
grub2-mkconfig -o /boot/grub2/grub.cfg

Reboot.

Set up repository: (on all nodes)

 cat << 'EOT' > /etc/yum.repos.d/ceph.repo
 [ceph]
 name=Ceph packages for $basearch
 baseurl=http://ceph.com/rpm/el7/$basearch
 enabled=1
 gpgcheck=1
 type=rpm-md
 gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
 
 [ceph-noarch]
 name=Ceph noarch packages
 baseurl=http://ceph.com/rpm/el7/noarch
 enabled=1
 gpgcheck=1
 type=rpm-md
 gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
 EOT

Import gpgkey: (on all nodes)

 rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'

Setup ntpd. (on all nodes)

yum install ntp

Edit /etc/ntp.conf (point it at your NTP servers) and start ntpd. (on all nodes)

systemctl enable ntpd
systemctl start ntpd
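
The ntp.conf edit itself is not shown above. A minimal sketch using the default CentOS pool servers (replace with your local NTP servers if you have them):

 grep ^server /etc/ntp.conf
 server 0.centos.pool.ntp.org iburst
 server 1.centos.pool.ntp.org iburst
 server 2.centos.pool.ntp.org iburst
 server 3.centos.pool.ntp.org iburst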

Install: (on all nodes)

yum install libunwind -y
yum install -y  ceph-common ceph ceph-fuse ceph-deploy

Deploying.

(on kosmo-virt1) 
cd /etc/ceph
ceph-deploy new kosmo-virt1 kosmo-virt2 kosmo-virt3

MON deploying: (on kosmo-virt1)

ceph-deploy  mon create-initial
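
After the initial monitors come up, a quick sanity check (a sketch; the cluster will stay in HEALTH_WARN until OSDs are added):

 ceph mon stat   # all three monitors should be listed and in quorum
 ceph -s         # overall cluster status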

OSD deploying:

(on kosmo-virt1)

 cd /etc/ceph
 ceph-deploy gatherkeys kosmo-virt1
 ceph-deploy disk zap kosmo-virt1:sdb
 ceph-deploy osd prepare kosmo-virt1:sdb
 ceph-deploy disk zap kosmo-virt1:sdc
 ceph-deploy osd prepare kosmo-virt1:sdc
 ceph-deploy disk zap kosmo-virt1:sdd
 ceph-deploy osd prepare kosmo-virt1:sdd
 ceph-deploy disk zap kosmo-virt1:sde
 ceph-deploy osd prepare kosmo-virt1:sde
 ceph-deploy disk zap kosmo-virt1:sdf
 ceph-deploy osd prepare kosmo-virt1:sdf
 ceph-deploy disk zap kosmo-virt1:sdg
 ceph-deploy osd prepare kosmo-virt1:sdg

(on kosmo-virt2)

 cd /etc/ceph
 ceph-deploy gatherkeys kosmo-virt2
 ceph-deploy disk zap kosmo-virt2:sdb
 ceph-deploy osd prepare kosmo-virt2:sdb
 ceph-deploy disk zap kosmo-virt2:sdc
 ceph-deploy osd prepare kosmo-virt2:sdc
 ceph-deploy disk zap kosmo-virt2:sdd
 ceph-deploy osd prepare kosmo-virt2:sdd
 ceph-deploy disk zap kosmo-virt2:sde
 ceph-deploy osd prepare kosmo-virt2:sde
 ceph-deploy disk zap kosmo-virt2:sdf
 ceph-deploy osd prepare kosmo-virt2:sdf
 ceph-deploy disk zap kosmo-virt2:sdg
 ceph-deploy osd prepare kosmo-virt2:sdg

(on kosmo-virt3)

 cd /etc/ceph
 ceph-deploy gatherkeys kosmo-virt3
 ceph-deploy disk zap kosmo-virt3:sdb
 ceph-deploy osd prepare kosmo-virt3:sdb
 ceph-deploy disk zap kosmo-virt3:sdc
 ceph-deploy osd prepare kosmo-virt3:sdc
 ceph-deploy disk zap kosmo-virt3:sdd
 ceph-deploy osd prepare kosmo-virt3:sdd
 ceph-deploy disk zap kosmo-virt3:sde
 ceph-deploy osd prepare kosmo-virt3:sde
 ceph-deploy disk zap kosmo-virt3:sdf
 ceph-deploy osd prepare kosmo-virt3:sdf
 ceph-deploy disk zap kosmo-virt3:sdg
 ceph-deploy osd prepare kosmo-virt3:sdg

where sd[b-g] are the SSD disks.
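
The per-disk commands above can also be written as a loop. A sketch, run from /etc/ceph on each node (the disk list is the same sd[b-g] set):

 NODE=$(hostname -s)
 for d in sdb sdc sdd sde sdf sdg; do
     ceph-deploy disk zap $NODE:$d
     ceph-deploy osd prepare $NODE:$d
 done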

MDS deploying:

The new Giant release of Ceph no longer creates the data and metadata pools by default.
Use ceph osd lspools to check, then create them:

 ceph osd pool create data 1024
 ceph osd pool set data min_size 1
 ceph osd pool set data size 2
 ceph osd pool create metadata 1024
 ceph osd pool set metadata min_size 1
 ceph osd pool set metadata size 2

Check the pool ids of the data and metadata pools with

 ceph osd lspools
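
Illustrative output (your pool ids may differ; the ids below match the newfs example in the next step):

 0 rbd,3 data,4 metadata,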

Configure FS

 ceph mds newfs 4 3 --yes-i-really-mean-it

where 4 is the id of the metadata pool and 3 is the id of the data pool.

Configure MDS

(on kosmo-virt1)

 cd /etc/ceph
 ceph-deploy mds create kosmo-virt1

(on kosmo-virt2)

 cd /etc/ceph
 ceph-deploy mds create kosmo-virt2

(on all nodes)

 chkconfig ceph on

Configure kosmo-arch.

Copy /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring from any of the kosmo-virt nodes to kosmo-arch.

Preparing Ceph for OpenNebula.

Create pool:

 ceph osd pool create one 4096
 ceph osd pool set one min_size 1
 ceph osd pool set one size 2

Setup authorization to pool one:

 ceph auth get-or-create client.oneadmin mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=one' > /etc/ceph/ceph.client.oneadmin.keyring

Get key from keyring:

  cat /etc/ceph/ceph.client.oneadmin.keyring | grep key | awk '{print $3}' >>  /etc/ceph/oneadmin.key

Checking:

 ceph auth list

Copy /etc/ceph/ceph.client.oneadmin.keyring and /etc/ceph/oneadmin.key to the other nodes.

Preparing for OpenNebula HA

Configuring MariaDB cluster

Configure MariaDB cluster on all nodes except kosmo-arch

Setup repo:

 cat << EOT > /etc/yum.repos.d/mariadb.repo
 [mariadb]
 name = MariaDB
 baseurl = http://yum.mariadb.org/10.0/centos7-amd64
 gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
 gpgcheck=1
 EOT

Install:

 yum install MariaDB-Galera-server MariaDB-client rsync galera

start service:

 service mysql start
 chkconfig mysql on
 mysql_secure_installation

prepare for cluster:

 mysql -p
 GRANT USAGE ON *.* to sst_user@'%' IDENTIFIED BY 'PASS';
 GRANT ALL PRIVILEGES on *.* to sst_user@'%';
 FLUSH PRIVILEGES;
 exit
 service mysql stop

configuring cluster: (for kosmo-virt1)

 cat << EOT > /etc/my.cnf
 collation-server = utf8_general_ci
 init-connect = 'SET NAMES utf8'
 character-set-server = utf8
 binlog_format=ROW
 default-storage-engine=innodb
 innodb_autoinc_lock_mode=2
 innodb_locks_unsafe_for_binlog=1
 query_cache_size=0
 query_cache_type=0
 bind-address=0.0.0.0
 datadir=/var/lib/mysql
 innodb_log_file_size=100M
 innodb_file_per_table
 innodb_flush_log_at_trx_commit=2
 wsrep_provider=/usr/lib64/galera/libgalera_smm.so
 wsrep_cluster_address="gcomm://172.19.254.1,172.19.254.2,172.19.254.3"
 wsrep_cluster_name='scanex_galera_cluster'
 wsrep_node_address='172.19.254.1' # setup real node ip
 wsrep_node_name='kosmo-virt1' #  setup real node name
 wsrep_sst_method=rsync
 wsrep_sst_auth=sst_user:PASS
 EOT

(for kosmo-virt2)

 cat << EOT > /etc/my.cnf
 collation-server = utf8_general_ci
 init-connect = 'SET NAMES utf8'
 character-set-server = utf8
 binlog_format=ROW
 default-storage-engine=innodb
 innodb_autoinc_lock_mode=2
 innodb_locks_unsafe_for_binlog=1
 query_cache_size=0
 query_cache_type=0
 bind-address=0.0.0.0
 datadir=/var/lib/mysql
 innodb_log_file_size=100M
 innodb_file_per_table
 innodb_flush_log_at_trx_commit=2
 wsrep_provider=/usr/lib64/galera/libgalera_smm.so
 wsrep_cluster_address="gcomm://172.19.254.1,172.19.254.2,172.19.254.3"
 wsrep_cluster_name='scanex_galera_cluster'
 wsrep_node_address='172.19.254.2' # setup real node ip
 wsrep_node_name='kosmo-virt2' #  setup real node name
 wsrep_sst_method=rsync
 wsrep_sst_auth=sst_user:PASS
 EOT

(for kosmo-virt3)

 cat << EOT > /etc/my.cnf
 collation-server = utf8_general_ci
 init-connect = 'SET NAMES utf8'
 character-set-server = utf8
 binlog_format=ROW
 default-storage-engine=innodb
 innodb_autoinc_lock_mode=2
 innodb_locks_unsafe_for_binlog=1
 query_cache_size=0
 query_cache_type=0
 bind-address=0.0.0.0
 datadir=/var/lib/mysql
 innodb_log_file_size=100M
 innodb_file_per_table
 innodb_flush_log_at_trx_commit=2
 wsrep_provider=/usr/lib64/galera/libgalera_smm.so
 wsrep_cluster_address="gcomm://172.19.254.1,172.19.254.2,172.19.254.3"
 wsrep_cluster_name='scanex_galera_cluster'
 wsrep_node_address='172.19.254.3' # setup real node ip
 wsrep_node_name='kosmo-virt3' #  setup real node name
 wsrep_sst_method=rsync
 wsrep_sst_auth=sst_user:PASS
 EOT

(on kosmo-virt1)

 /etc/init.d/mysql start --wsrep-new-cluster

(on kosmo-virt2)

 /etc/init.d/mysql start

(on kosmo-virt3)

 /etc/init.d/mysql start

check on all nodes:

 mysql -p
 show status like 'wsrep%';

Variable_name                                Value
--------------------------------------------------

wsrep_local_state_uuid 739895d5-d6de-11e4-87f6-3a3244f26574
wsrep_protocol_version 7
wsrep_last_committed 0
wsrep_replicated 0
wsrep_replicated_bytes 0
wsrep_repl_keys 0
wsrep_repl_keys_bytes 0
wsrep_repl_data_bytes 0
wsrep_repl_other_bytes 0
wsrep_received 6
wsrep_received_bytes 425
wsrep_local_commits 0
wsrep_local_cert_failures 0
wsrep_local_replays 0
wsrep_local_send_queue 0
wsrep_local_send_queue_max 1
wsrep_local_send_queue_min 0
wsrep_local_send_queue_avg 0.000000
wsrep_local_recv_queue 0
wsrep_local_recv_queue_max 1
wsrep_local_recv_queue_min 0
wsrep_local_recv_queue_avg 0.000000
wsrep_local_cached_downto 18446744073709551615
wsrep_flow_control_paused_ns 0
wsrep_flow_control_paused 0.000000
wsrep_flow_control_sent 0
wsrep_flow_control_recv 0
wsrep_cert_deps_distance 0.000000
wsrep_apply_oooe 0.000000
wsrep_apply_oool 0.000000
wsrep_apply_window 0.000000
wsrep_commit_oooe 0.000000
wsrep_commit_oool 0.000000
wsrep_commit_window 0.000000
wsrep_local_state 4
wsrep_local_state_comment Synced
wsrep_cert_index_size 0
wsrep_causal_reads 0
wsrep_cert_interval 0.000000
wsrep_incoming_addresses 172.19.254.1:3306,172.19.254.3:3306,172.19.254.2:3306
wsrep_evs_delayed
wsrep_evs_evict_list
wsrep_evs_repl_latency 0/0/0/0/0
wsrep_evs_state OPERATIONAL
wsrep_gcomm_uuid 7397d6d6-d6de-11e4-a515-d3302a8c2342
wsrep_cluster_conf_id 2
wsrep_cluster_size 2
wsrep_cluster_state_uuid 739895d5-d6de-11e4-87f6-3a3244f26574
wsrep_cluster_status Primary
wsrep_connected ON
wsrep_local_bf_aborts 0
wsrep_local_index 0
wsrep_provider_name Galera
wsrep_provider_vendor Codership Oy info@codership.com
wsrep_provider_version 25.3.9(r3387)
wsrep_ready ON
wsrep_thread_count 2


Creating user and database:

mysql -p
create database opennebula;
GRANT USAGE ON opennebula.* to oneadmin@'%' IDENTIFIED BY 'PASS';
GRANT ALL PRIVILEGES on opennebula.* to oneadmin@'%';
FLUSH PRIVILEGES;

Remember: if all nodes go down, the most up-to-date node must be started first with /etc/init.d/mysql start --wsrep-new-cluster, and you have to find that node yourself. If you bootstrap the cluster from a node with an outdated view, the other nodes will fail with an error like this in their logs: [ERROR] WSREP: gcs/src/gcs_group.cpp:void group_post_state_exchange(gcs_group_t*)():319: Reversing history: 0 -> 0, this member has applied 140536161751824 more events than the primary component. Data loss is possible. Aborting.
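
One way to find the most up-to-date node is to compare the last committed seqno that Galera stores in grastate.dat on every node and bootstrap from the node with the highest value (the values below are illustrative):

 cat /var/lib/mysql/grastate.dat
 # GALERA saved state
 version: 2.1
 uuid:    739895d5-d6de-11e4-87f6-3a3244f26574
 seqno:   42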

Configuring HA cluster

Unfortunately, the pcs cluster stack conflicts with the OpenNebula server packages, so we will go with pacemaker, corosync and crmsh instead.

Installing HA

Set up repo on all nodes except kosmo-arch:

 cat << EOT > /etc/yum.repos.d/network\:ha-clustering\:Stable.repo
 [network_ha-clustering_Stable]
 name=Stable High Availability/Clustering packages (CentOS_CentOS-7)
 type=rpm-md
 baseurl=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/
 gpgcheck=1
 gpgkey=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/repodata/repomd.xml.key
 enabled=1
 EOT

Install on all nodes except kosmo-arch:

 yum install corosync pacemaker crmsh resource-agents -y

On kosmo-virt1 create configuration

 vi /etc/corosync/corosync.conf
 totem {
 version: 2  
 secauth: off
 cluster_name: cluster
 transport: udpu
 }
 nodelist {
 node {
      ring0_addr: kosmo-virt1
      nodeid: 1
     }
 node {
      ring0_addr: kosmo-virt2
      nodeid: 2
     }
  node {
      ring0_addr: kosmo-virt3
      nodeid: 3
     }
 }
 quorum {
 provider: corosync_votequorum
 }
 logging {
 to_syslog: yes
 }

and create authkey on kosmo-virt1

 cd /etc/corosync
 corosync-keygen

Copy corosync.conf and authkey to kosmo-virt2 and kosmo-virt3.
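
For example (a sketch):

 scp /etc/corosync/corosync.conf /etc/corosync/authkey kosmo-virt2:/etc/corosync/
 scp /etc/corosync/corosync.conf /etc/corosync/authkey kosmo-virt3:/etc/corosync/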

Enabling (on all nodes except kosmo-arch):

 systemctl enable pacemaker corosync

Starting (on all nodes except kosmo-arch):

 systemctl start pacemaker corosync

Checking:

 crm status
 
 Last updated: Mon Mar 30 18:33:14 2015
 Last change: Mon Mar 30 18:23:47 2015 via crmd on kosmo-virt2
 Stack: corosync
 Current DC: kosmo-virt2 (2) - partition with quorum
 Version: 1.1.10-32.el7_0.1-368c726
 3 Nodes configured
 0 Resources configured
 Online: [ kosmo-virt1 kosmo-virt2 kosmo-virt3]

Add properties:

crm configure property stonith-enabled=false
crm configure property no-quorum-policy=stop

Installing Opennebula

Installing

Setup repo on all nodes except kosmo-arch:

 cat << EOT > /etc/yum.repos.d/opennebula.repo
 [opennebula]
 name=opennebula
 baseurl=http://downloads.opennebula.org/repo/4.12/CentOS/7/x86_64/
 enabled=1
 gpgcheck=0
 EOT

Installing (on all nodes except kosmo-arch):

 yum install -y opennebula-server opennebula-sunstone opennebula-node-kvm qemu-img qemu-kvm

Ruby Runtime Installation:

 /usr/share/one/install_gems

Change the oneadmin password:

 passwd oneadmin

Create passwordless SSH access for oneadmin (on kosmo-virt1):

 su oneadmin
 cd ~/.ssh
 ssh-keygen -t dsa
 cat id_dsa.pub >> authorized_keys
 chown oneadmin:oneadmin authorized_keys
 chmod 600 authorized_keys
 echo "StrictHostKeyChecking no" > config

Copy the keys to the other nodes (remember that oneadmin's home directory is /var/lib/one).

Change listen for sunstone-server (on all nodes):

 sed -i 's/host:\ 127\.0\.0\.1/host:\ 0\.0\.0\.0/g' /etc/one/sunstone-server.conf

on kosmo-virt1:

copy all /var/lib/one/.one/*.auth files and the one.key file to OTHER_NODES:/var/lib/one/.one/
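
For example (a sketch, run as oneadmin on kosmo-virt1):

 scp /var/lib/one/.one/*.auth /var/lib/one/.one/one.key kosmo-virt2:/var/lib/one/.one/
 scp /var/lib/one/.one/*.auth /var/lib/one/.one/one.key kosmo-virt3:/var/lib/one/.one/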

Start the services on kosmo-virt1 to test them:

 
 systemctl start opennebula opennebula-sunstone

Try to connect to http://node:9869.
Check logs for errors (/var/log/one/oned.log /var/log/one/sched.log /var/log/one/sunstone.log).
If no errors:

 systemctl stop opennebula opennebula-sunstone

Add Ceph support to qemu-kvm on all nodes except kosmo-arch.

 qemu-img -h | grep rbd
 /usr/libexec/qemu-kvm --drive format=? | grep rbd

If there is no rbd support, then you have to compile and install:

 qemu-kvm-rhev
 qemu-kvm-common-rhev 
 qemu-img-rhev

Download:

 yum groupinstall -y "Development Tools"
 yum install -y yum-utils rpm-build
 yumdownloader --source qemu-kvm
 rpm -ivh qemu-kvm-1.5.3-60.el7_0.11.src.rpm

Compiling.

 
 cd ~/rpmbuild/SPECS
 vi qemu-kvm.spec

Change %define rhev 0 to %define rhev 1.

 rpmbuild -ba qemu-kvm.spec

Installing (for all nodes except kosmo-arch).

 rpm -e --nodeps libcacard-1.5.3-60.el7_0.11.x86_64
 rpm -e --nodeps qemu-img-1.5.3-60.el7_0.11.x86_64
 rpm -e --nodeps qemu-kvm-common-1.5.3-60.el7_0.11.x86_64
 rpm -e --nodeps qemu-kvm-1.5.3-60.el7_0.11.x86_64
 rpm -ivh libcacard-rhev-1.5.3-60.el7.centos.11.x86_64.rpm
 rpm -ivh qemu-img-rhev-1.5.3-60.el7.centos.11.x86_64.rpm
 rpm -ivh qemu-kvm-common-rhev-1.5.3-60.el7.centos.11.x86_64.rpm
 rpm -ivh qemu-kvm-rhev-1.5.3-60.el7.centos.11.x86_64.rpm

Check for ceph support.

 qemu-img -h | grep rbd
 Supported formats: vvfat vpc vmdk vhdx vdi sheepdog sheepdog sheepdog rbd raw host_cdrom host_floppy host_device file qed qcow2 qcow parallels nbd nbd nbd iscsi gluster gluster gluster gluster dmg cow cloop bochs blkverify    blkdebug
 /usr/libexec/qemu-kvm --drive format=? | grep rbd
 Supported formats: vvfat vpc vmdk vhdx vdi sheepdog sheepdog sheepdog rbd raw host_cdrom host_floppy host_device file qed qcow2 qcow parallels nbd nbd nbd iscsi gluster gluster gluster gluster dmg cow cloop bochs blkverify blkdebug

Try to write image (for all nodes except kosmo-arch):

 qemu-img create -f rbd rbd:one/test-virtN 10G

where N is the node number.
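
To verify that the test image landed in the pool and to clean it up afterwards (a sketch):

 rbd -p one ls              # the test-virtN image should be listed
 rbd -p one rm test-virtN   # remove the test image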

Add Ceph support to libvirt

On all nodes:

 systemctl enable messagebus.service
 systemctl start messagebus.service
 systemctl enable libvirtd.service
 systemctl start libvirtd.service

On kosmo-virt1 create uuid:

 uuidgen
 cfb34c4b-d95c-4abc-a4cc-f8a2ae532cb5

Create secret.xml

 
 cat > secret.xml <<EOF
 <secret ephemeral='no' private='no'>
 <uuid>cfb34c4b-d95c-4abc-a4cc-f8a2ae532cb5</uuid>
 <usage type='ceph'>
 <name>client.oneadmin AQDp1aqz+JPAJhAAIcKf/Of0JfpJRQvfPLqn9Q==</name>
 </usage>
 </secret>
 EOF

where AQDp1aqz+JPAJhAAIcKf/Of0JfpJRQvfPLqn9Q== is the content of /etc/ceph/oneadmin.key.
Copy secret.xml to other nodes.

Add key to libvirt (for all nodes except kosmo-arch)

 virsh secret-define --file secret.xml
 virsh secret-set-value --secret cfb34c4b-d95c-4abc-a4cc-f8a2ae532cb5 --base64 $(cat /etc/ceph/oneadmin.key)

check

 virsh secret-list
 UUID                                 Usage
 -----------------------------------------------------------
 cfb34c4b-d95c-4abc-a4cc-f8a2ae532cb5 ceph client.oneadmin AQDp1aqz+JPAJhAAIcKf/Of0JfpJRQvfPLqn9Q==

Restart libvirtd:

 systemctl restart libvirtd.service

Converting the database to MySQL:

Downloading script:

 wget http://www.redmine.org/attachments/download/6239/sqlite3-to-mysql.py

Converting:

 sqlite3 /var/lib/one/one.db .dump | ./sqlite3-to-mysql.py > mysql.sql   
 mysql -u oneadmin -p opennebula < mysql.sql

Change /etc/one/oned.conf from

 DB = [ backend = "sqlite" ]

to

 DB = [ backend = "mysql",
      server  = "localhost",
      port    = 0,
      user    = "oneadmin",
      passwd  = "PASS",
      db_name = "opennebula" ]

Copy oned.conf (as root) to the other nodes, except kosmo-arch.

Check kosmo-virt2 and kosmo-virt3 nodes in turn:

   systemctl start opennebula opennebula-sunstone

check logs for errors (/var/log/one/oned.log /var/log/one/sched.log /var/log/one/sunstone.log)

   systemctl stop opennebula opennebula-sunstone

Creating HA resources

On all nodes except kosmo-arch:

 systemctl disable opennebula opennebula-sunstone opennebula-novnc

From any of the nodes except kosmo-arch:

 crm
 configure
 primitive ClusterIP ocf:heartbeat:IPaddr2 params ip="192.168.14.41" cidr_netmask="24" op monitor interval="30s"
 primitive opennebula_p systemd:opennebula \
 op monitor interval=60s timeout=20s \
 op start interval="0" timeout="120s" \
 op stop  interval="0" timeout="120s" 
 primitive opennebula-sunstone_p systemd:opennebula-sunstone \
 op monitor interval=60s timeout=20s \
 op start interval="0" timeout="120s" \
 op stop  interval="0" timeout="120s" 
 primitive opennebula-novnc_p systemd:opennebula-novnc \
 op monitor interval=60s timeout=20s \
 op start interval="0" timeout="120s" \
 op stop  interval="0" timeout="120s" 
 group Opennebula_HA ClusterIP opennebula_p opennebula-sunstone_p  opennebula-novnc_p
 exit

Check

 crm status
 Last updated: Tue Mar 31 16:43:00 2015
 Last change: Tue Mar 31 16:40:22 2015 via cibadmin on kosmo-virt1
 Stack: corosync
 Current DC: kosmo-virt2 (2) - partition with quorum
 Version: 1.1.10-32.el7_0.1-368c726
 3 Nodes configured
 4 Resources configured
 Online: [ kosmo-virt1 kosmo-virt2 kosmo-virt3 ]
 Resource Group: Opennebula_HA
   ClusterIP  (ocf::heartbeat:IPaddr2):       Started kosmo-virt1
   opennebula_p       (systemd:opennebula):   Started kosmo-virt1
   opennebula-sunstone_p      (systemd:opennebula-sunstone):  Started kosmo-virt1
   opennebula-novnc_p (systemd:opennebula-novnc):     Started kosmo-virt1

Configuring OpenNebula

http://active_node:9869 – web management.

Using the web interface: 1. Create a cluster. 2. Add the hosts (using the 192.168.14.0 network).

Console management.

3. Add the virtual network. (su oneadmin)

 
 cat << EOT > def.net
 NAME    = "Shared LAN"
 TYPE    = RANGED
 # Now we'll use the host private network (physical)
 BRIDGE  = nab1
 NETWORK_SIZE    = C
 NETWORK_ADDRESS = 192.168.14.0
 EOT
 onevnet create def.net

4. Create image rbd datastore. (su oneadmin)

 cat << EOT > rbd.conf
 NAME = "cephds"
 DS_MAD = ceph
 TM_MAD = ceph
 DISK_TYPE = RBD
 POOL_NAME = one
 BRIDGE_LIST ="192.168.14.42 192.168.14.43 192.168.14.44"
 CEPH_HOST ="172.19.254.1:6789 172.19.254.2:6789 172.19.254.3:6789"
 CEPH_SECRET ="cfb34c4b-d95c-4abc-a4cc-f8a2ae532cb5" #uuid key, looked at libvirt authentication for ceph
 CEPH_USER = oneadmin
 EOT

 onedatastore create rbd.conf

5. Create system ceph datastore.

Check the last datastore id number (call it N):

onedatastore list

On all nodes, create the directory and mount CephFS:

mkdir /var/lib/one/datastores/N+1
echo "172.19.254.K:6789:/ /var/lib/one/datastores/N+1 ceph rw,relatime,name=admin,secret=AQB4jxJV8PuhJhAAdsdsdRBkSFrtr0VvnQNljBw==,nodcache 0 0 # see secret in /etc/ceph/ceph.client.admin.keyring" >> /etc/fstab
mount /var/lib/one/datastores/N+1

where K is the node number, i.e. 172.19.254.K is the current node's IP.

From one node, change the permissions:

chown oneadmin:oneadmin /var/lib/one/datastores/N+1

Create system ceph datastore (su oneadmin):

 cat << EOT > sys_fs.conf
 NAME    = system_ceph
 TM_MAD  = shared
 TYPE    = SYSTEM_DS
 EOT

 onedatastore create sys_fs.conf

6. Add the nodes, vnets and datastores to the created cluster using the web interface.

HA VM

Here is the official documentation.
One comment: I use the migrate action instead of the recreate command (hence the -m argument in the hook below).

 /etc/one/oned.conf
 HOST_HOOK = [
  name      = "error",
  on        = "ERROR",
  command   = "host_error.rb",
  arguments = "$HID -m",
  remote    = no ]

BACKUP

A few words about backup.

Use the persistent image type with this backup scheme.

For backup, a single Linux server, kosmo-arch (a Ceph client) with ZFS on Linux installed, is used. Deduplication is enabled on the zpool. (Remember that deduplication requires about 2 GB of RAM per 1 TB of storage space.)

Example of a simple script started by cron:

#!/bin/sh
# Daily RBD snapshot + export to the ZFS-backed /rbdback directory,
# keeping 60 days of history.
currdate=`/bin/date +%Y-%m-%0e`
olddate=`/bin/date --date="60 days ago" +%Y-%m-%0e`
imagelist="one-21" # space-delimited list of RBD images to back up
for i in $imagelist
do
 # does a snapshot for today / for 60 days ago already exist?
 snapcurchk=`/usr/bin/rbd snap ls one/$i | grep $currdate`
 snapoldchk=`/usr/bin/rbd snap ls one/$i | grep $olddate`
 if test -z "$snapcurchk"
  then
   /usr/bin/rbd snap create --snap $currdate one/$i
   /usr/bin/rbd export one/$i@$currdate /rbdback/$i-$currdate
  else
   echo "current snapshot exists"
 fi
 if test -z "$snapoldchk"
  then
   echo "old snapshot doesn't exist"
  else
   # drop the 60-day-old snapshot and its exported copy
   /usr/bin/rbd snap rm one/$i@$olddate
   /bin/rm -f /rbdback/$i-$olddate
 fi
done
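
A crontab entry to run the script nightly might look like this (the script path /root/rbd-backup.sh is an assumption):

 0 2 * * * /root/rbd-backup.sh >> /var/log/rbd-backup.log 2>&1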


Use the onevm utility or the web interface (see the VM template) to find out which image is assigned to a VM.

onevm list
onevm show "VM_ID" -a | grep IMAGE_ID

PS

Don't forget to change the VM disk driver to virtio (device vda); for Windows guests, install the virtio drivers. Without that you will face low I/O performance (no more than 100 MB/s); with the virtio drivers I saw 415 MB/s.

Links.

Maintenance Release – OpenNebula Cotton Candy 4.12.1

The OpenNebula team is proud to announce a new maintenance release of OpenNebula 4.12.1 Cotton Candy. This release comes with several bug fixes found after the 4.12 release. These bug fixes cover different OpenNebula components, such as the scheduler, the Cloud View self-service portal, the Sunstone web interface, the OpenNebula core and several drivers (VM, Auth, Network). Check the full list of bug fixes in the development portal.

Besides the bug fixes mentioned above, 4.12.1 includes several improvements:

If you haven't had the chance so far to try OpenNebula 4.12, now is the time to download and install OpenNebula 4.12.1 Cotton Candy. As a highlight, find below the new showback feature, which enables the generation of cost reports that can be integrated with chargeback and billing platforms:

OpenNebula Conf Call for Speakers Deadline Extended, April 15th

Due to the number of requests for extending the Call for Speakers we have moved the deadline to April 15th.  Speakers will receive free admission, which includes:

  • Attendance at all conference presentations
  • Attendance at pre-conference tutorials and hacking sessions
  • Coffee break during the morning and afternoon breaks
  • Lunch on both conference days
  • Dinner event on the first conference day
  • Tapas dinner on the pre-conference day
  • WiFi access
  • … and the opportunity to address a large audience of talented and influential cloud and open-source experts!

The third ever OpenNebula International Conference will be held in Barcelona from the 20th to the 22nd of October 2015. As you may already know, previous editions were a total success, with useful OpenNebula experiences masterly portrayed by people from Akamai, Produban -Santander Group-, BBC, FermiLab, ESA, Deloitte, CentOS, and many others.

Should you be interested, we would like to ask you to fill the Session Proposal Form before April 15th.

See you in Barcelona!


OpenNebula Newsletter – March 2015

This newsletter is intended for OpenNebula users, developers and members of the community; it compiles the highlights of the OpenNebula project during the last month and the planned actions for the upcoming months.

Technology

The OpenNebula team released this month the latest stable release, 4.12 Cotton Candy. This is a stable release and so a recommended update for all production deployments. Cotton Candy comes with several improvements in different subsystems and components. OpenNebula is now able to generate cost reports that can be integrated with chargeback and billing platforms, and also presented to both the administrators and the end users.

Moreover Virtual Datacenters have been redefined as a new kind of OpenNebula resource. Making VDCs a separate resource has several advantages, for instance they can have one or more Groups added to them. This gives the Cloud Admin greater resource assignment flexibility.

Other perks of upgrading your installation to 4.12 include SPICE support, the excellent addition of Security Groups -allowing administrators to define the firewall rules and apply them to the Virtual Machines-, support for VXLAN, huge improvements in vCenter -import running VMs, network management, new vCenter cloud view, VM contextualization support, etc -,system datastore flushing, and many more minor features and important bugfixes. As usual, the migration path has been thoroughly designed and tested so updating to Cotton Candy from previous versions is a breeze. No excuses then for not bringing your OpenNebula to the latest state of the art in cloud management platforms!
Also this month a new release of vOneCloud, 1.2.1, the open replacement for VMware vCloud, was made available to the general public, meaning that all users without an active support subscription are able to upgrade through the Control Panel with a single click. If you are using vOneCloud 1.2, take this chance to get an improved version, including VLAN support through Sunstone, notifications of new releases, better log display at the Control Panel and more. And if you are still not using vOneCloud give it a try! We’ve packed 1.2.1 in an OVA for your convenience, just to keep you without excuses again :)

Community

We certainly love our community, and it seems like you love us back! We are very proud of having Runtastic among our users, and when they explain why they chose us, we feel elated. They started with OpenNebula using the virtualization management features, and are continuously evolving towards more cloudy features. Way to go!

Our newly-launched machines are automatically included into Chef, and start doing their work within a minute

Good to know that OpenNebula clouds are expanding, this means they are healthy clouds. Like this one by bpsNode, expanding to Miami and Dallas. We are also excited by awesome user stories like this one featuring Altus IT with Lenovo delivering IT infrastructures in Croatia using OpenNebula. Way to go!

How good is your Russian? If you are fluent, enjoy the reasons why Yuterra chose OpenNebula and Ceph for its Private Cloud. Moreover, fluent in German as well? Check out this OpenNebula webinar then.

Spreading the word is also something we deeply value from our community, hence we want to welcome the newly born Barcelona User Group! If you are in Barcelona, check it out, you won't be disappointed. We also have examples like this one from FOSSAsia, and it is quite funny too, do not miss it.

We love also this kind of feedback, how OpenNebula plays nice with other components in the ecosystem. Keeping our marketplace healthy and up to date is also kudos for the community, like this addition of ArchLinux to the catalog. Thanks!

A big thanks as well to all those members of the community who make it possible to have a multi-language Sunstone. This really fosters adoption, and we could never have done it without you! And last, but not least, it is very gratifying to see how OpenNebula helps build robust products like this one.

Outreach

After the second edition of the OpenNebula Conference, we are already preparing for the upcoming, third edition in Barcelona, October 2015. Interested? You are still in time for getting a good price deal for tickets. If you want to share your OpenNebula experiences, the call for papers is open as well until the end of this month, so the clock is ticking, do not miss the chance!. Also, your company may be interested in the sponsorship opportunities for OpenNebulaConf 2015.

This last month, the OpenNebula project proudly sponsored a corner of the Open Cloud & Developer Park at the Cloud Expo Europe. During two intense days, members of the team gave several talks about the OpenNebula philosophy, design and features in the Park's theatre. Also on board in the OpenNebula corner, our partners from CloudWeavers showed how OpenNebula does everything it does with a minimal footprint. The guys from viApps also did not miss the opportunity to be in a pod in the corner to tell the attendees about their integration and added value. OpenNebula Systems, the company behind OpenNebula, was also present in their own pod presenting vOneCloud, the product that turns your vCenter infrastructure into a private cloud. Also, Runtastic introduced us to the reasons why they chose OpenNebula over other Cloud Management Platforms, to build a cloud serving 50 million users. Impressive!


The TechDay in Prague was a total success, with a full house, highly engaged attendees and lots of juicy feedback. We plan to follow with other cities including Chicago, Dallas and Dublin. Send us an email or write to the community discuss mailing list if you are interested in hosting a TechDay event.


As you may know, OpenNebula is participating in the BEACON project, a flagship European project in federated cloud networking; because of this, members of the team traveled to Brussels for NetFutures15 to find synergies with other research projects.

During the following months, members of the OpenNebula team will be speaking in the following events:

If you are interested in receiving OpenNebula training, check the schedule for 2015 public classes at OpenNebula Headquarters. Please contact us if you would like to request training near you.

Remember that you can see slides and resources from past events in our Events page. We have also created a Slideshare account where you can see the slides from some of our recent presentations.

Barcelona Opennebula User Group


As you know, the OpenNebula community is an important pillar of the project. Through the distribution lists and forums, the community can raise questions and requests, or contribute new ideas to the developers. This information is very useful and helps other users and the development of new features.

However, the OpenNebula project has also thought about User Groups. The OpenNebula User Groups are local communities where users can discuss and share information and experiences in a more direct way across 'town', spreading the word more closely and bringing together people who want to collaborate with the project.

Also, remember that this year (2015) the annual OpenNebula conference travels from Berlin to Barcelona, the 'smart city' that will be the meeting point where developers, users, administrators, researchers, … can share experiences, case studies, etc.


For these reasons, some cloudadmins of Barcelona area have decided to create the Barcelona OpenNebula User Group. This group aims to be a small-scale community where we can discuss and find common objectives that support the project. We have created a website and a Google group where we will inform about first steps and work together in common goals.

In addition, as part of the ONEBCN user group's official presentation tour, we will be at sudoers on the 5th of May, a sysadmin group that meets regularly at the North Campus of the UPC.

It is a totally open group, so you are welcome!  First members of the Group:

Oriol Martí, Gabriel Verdejo, Angel Galindo Muñoz, Xavier Peralta Ramos, Jordi Guijarro, Juan José Fuentes, Miguel Ángel Flores, Alex Vaqué

Some interesting links:

Cloudadmins Community Blog – http://www.cloudadmins.org

OneBCN Google Group – https://groups.google.com/forum/embed/?place=forum%2Fopennebula-barcelona-usergroup

Sudoers Barcelona – http://sudoers-barcelona.wikia.com/wiki/Sudoers_Barcelona_Wiki

vOneCloud 1.2.1 is Out!

A new version of vOneCloud, 1.2.1, has been released. This is an update to the previous stable version, 1.2, and it is an open release to the general public, meaning that you don’t need an active support subscription to access this upgrade.

This update is therefore available from the Control Panel with a single click. The Control Panel component will, behind the scenes:

  • Download the new vOneCloud packages
  • Install the new vOneCloud packages, keeping the existing configuration
  • Restart the OpenNebula service, with no downtime whatsoever to the currently running virtual machines

After the upgrade is performed, vOneCloud services would be up and running and updated to the latest version, which includes the following improvements:

  • Display logs in the Control Panel
  • Sunstone notifies the administrator user when there is a new release
  • Information of the newly available releases in the Control Panel
  • Better VLAN tagged Network handling in Sunstone

If you don't currently have a running instance of vOneCloud, you can download an OVA with 1.2.1 already installed; you only need to register in the vOneCloud support portal and visit this article.

Relevant Links

OpenNebula Conf 2015: Call for Speakers Reminder

As you may already know, this year OpenNebula Conf is taking place in Barcelona, Spain, on October 20-22. If you want to participate in this event and you have not submitted your talk yet, you have until March 31.

If you want to get an idea of the past OpenNebulaConf sessions, including talks from companies such as CentOS, Runtastic, Puppet Labs, Cloudweavers, RedHat, Deutsche Post, please check our YouTube channel or download the presentations from our SlideShare account.

Also we would like to remind you that the tickets are already available and if you buy your ticket before June 15th, you get the best discount of the year.

If you are interested in sponsoring this event, check out our sponsorship opportunities.

Hope to see you there


Why Did We Choose OpenNebula for Runtastic?

Link to the original article at TheStack.com


Armin Deliomini is a Linux, virtualisation and database engineer at Austrian-based mobile fitness company Runtastic, and has made a rewarding journey from commercial cloud solutions – such as VMware and Oracle – in favour of completely open-source alternatives. Over the last two years Armin has implemented a private ecostructure for the Runtastic ecosystem and its 50 million users. Armin will be speaking today at Cloud Expo Europe taking place alongside Data Centre World, this week on the 11th and 12th March.

Since 2009 Runtastic has created apps, products and services for health and fitness tracking and management – a powerful infrastructure including around 300 virtual machines on thirty OpenNebula nodes, ensuring that 100 million downloaded apps and around 50 million registered users can access our services at any time.

We didn't have a lot of time to decide on a technology to run our virtual environment. We had the classic vSphere environment in mind, but building an environment completely around opensource software spoke against a commercial virtual solution. We tested Ovirt, Proxmox and Openstack, the latter of which was very close, since we use Ubuntu in our overall infrastructure, and it was the most-hyped Opensource cloud solution at that time. A meeting with Tino Vasquez at the Netways booth at Cebit 2013 convinced us that OpenNebula was at least worth a thought. We set up a test installation; four months later our first production-grade OpenNebula cluster was fired up.

So why did we choose OpenNebula? Firstly, we liked the flexibility; in our business we don't know exactly where the road ahead is leading. We had to find a technology that would grow with our needs and that we could adapt easily. We came at that time from a classic Virtualization background, with Vmware vSphere, and that was how we started with OpenNebula – classic virtual machines running on a hypervisor cluster that was managed by a central piece of software. But we also knew that this was not the future. OpenNebula gave us the comfort to start in a well known way, but at the same time gave us room to evolve. Our first set-up consisted of 16 KVM hosts and a Netapp storage serving NFS. In the beginning OpenNebula didn't do more than give us an interface to start and stop machines, and to change their resource settings; but over time the situation developed very favourably. Our newly-launched machines are automatically included into Chef, and start doing their work within a minute. We also have the possibility to start machines in external clouds in the event of resource shortages.

Our current projects are a new Cisco UCS Blade infrastructure operating as OpenNebula nodes, to lift our compute power to ~ 1000 cores, and Ceph as a future storage backend – a successor to our two NFS storages. We set up our first Ceph cluster in our preproduction environment recently.

We are no experts on OpenNebula, but then, we don’t have to be. It simply works…

OpenNebula 4.12 Cotton Candy is Out!

The OpenNebula team is pleased to announce the immediate availability of the final version of OpenNebula 4.12, codename Cotton Candy. This release ships with several improvements in different subsystems and components. For the first time, OpenNebula will be able to generate cost reports that can be integrated with chargeback and billing platforms, and also presented to both the administrators and the end users. Each VM Template defined by the Cloud administrator can define a cost per cpu and per memory per hour.


Starting with Cotton Candy, Virtual Datacenters are a new kind of OpenNebula resource with its own ID, name, etc. and the term Resource Provider disappears. Making VDCs a separate resource has several advantages over the previous Group/VDC concept, since they can have one or more Groups added to them. This gives the Cloud Admin greater resource assignment flexibility.

In addition to the well known VNC support in Sunstone, OpenNebula 4.12 includes support to interact with Virtual Machines using the SPICE protocol. This feature can be enabled for any Virtual Machine just checking the option in the input/output section of the Template creation form.

Networking has been vastly improved in 4.12, with the addition of Security Groups, allowing administrators to define the firewall rules and apply them to the Virtual Machines. Also, Virtual Extensible LAN (VXLAN) is a network virtualization technology aimed to solve large cloud deployments problems, encapsulating Ethernet frames within UDP packets, and thus solving the 4096 VLAN limit problem. Cotton Candy is fully capable of managing VXLANs using the linux kernel integration.

Important new features related to the newly introduced vCenter support are available in OpenNebula 4.12: the ability to import running VMs and networks, including the attach/detach NIC functionality, a new cloud view tailored for vCenter, VM contextualization support and reacquire VM Templates with their logo and description.

Finally, several improvements are scattered across every other OpenNebula component: the possibility to flush and disable a system datastore, improvements in Sunstone for better user workflow, and many other bugfixes that stabilized features introduced in Fox Fur.

As usual OpenNebula releases are named after a Nebula. The Cotton Candy Nebula (IRAS 17150-3224) is located in the constellation of Ara.

This is a stable release and so a recommended update. It incorporates important improvements since 4.10 and several bug fixes since the 4.12 beta. Be sure to check the compatibility and upgrade guides. We invite you to download it and to check the QuickStart guides, as well as to browse the documentation, which has also been properly updated.

Security Groups were funded by BlackBerry, and network extensions to the vCenter driver by Echelon, in the context of the Fund a Feature Program.

Thanks to the community members and users who have contributed to this software release by being active in the discussions, answering user questions, or providing patches for bugfixes, features and documentation.

More information

OpenNebula Newsletter – January / February 2015

We want to let you know about what we are up to with the main news from the last two months regarding the OpenNebula project, including what you can expect in the following months.

Technology

The OpenNebula team released OpenNebula 4.10.2 last month. There were several affected components, ranging from drivers to the OpenNebula core as well as different Sunstone views. On the security side, a vulnerability in the xmlrpc server was patched, thanks to Dennis Felsch and Christian Mainka for reporting it. Many other bugfixes and minor improvements were made to your favourite CMP, check the complete changelog here. We would like to thank Echelon for making vCenter networking support possible in OpenNebula 4.10.2 through the Fund a Feature program.

Aiming at not missing a beat (not ever! We just reached the 10 thousand commit mark in OpenNebula), the team also recently released the beta version of OpenNebula 4.12; you can check bugfixes and new features here. We think you will enjoy the new additions, especially the SPICE support in Sunstone, as well as the Virtual Data Center redesign. Also, showback capabilities are included in 4.12, OpenNebula will report resource usage cost, enabling the integration with chargeback and billing platforms, the possibility to flush and disable a system datastore, the introduction of Security Groups, allowing administrators to define the firewall rules and apply them to the Virtual Machines, the ability to use VXLANs in your OpenNebula infrastructure, and many more. Moreover, important new features related to the newly introduced vCenter support are available in OpenNebula 4.12: ability to import running VMs and networks, including the attach/detach NIC functionality, a new cloud view tailored for vCenter, VM contextualization support and reacquire VM Templates with their logo and description.

If you ever wondered how to build a network overlay between two OpenNebula sites, or between your OpenNebula powered datacenter and any of the supported public clouds (Amazon EC2, MS Azure, IBM SoftLayer), then you are in for a treat. OpenNebula is going to participate in BEACON, the flagship European project bringing SDN and NFC advances to federated cloud networking. The project is set to pave the road towards the true start of a revolution in cloud networking, developing the building blocks to enable next generation network functionalities within and across data centers and cloud sites. This will foster the integration of OpenNebula and the SDN OpenDaylight, which we believe is very good news. It is also in line with the results of the European Commission Workshop on Global Cloud Experimental Facilities, placing OpenNebula on networking into the cloud.

But OpenNebula 4.12 is not the only important release of 2015 to date, vOneCloud 1.2 release also hit the road recently! If you haven’t heard yet, vOneCloud is an OpenNebula distribution optimized to work on existing VMware vCenter deployments, easing the deployment of an enterprise-ready OpenNebula cloud in just a few minutes in VMware environments managed by familiar tools such as vSphere and vCenter Operations Manager, enabling cloud provisioning, elasticity and multi-tenancy features. vOneCloud 1.2 comes with new features -it is worth highlighting the automatic import of virtual machines running in a vCenter instance into vOneCloud, with zero downtime- as well as new components -like the Control Panel, a web interface that eases the configuration of vOneCloud services and enables one click smooth upgrades to newer versions-.


Community

The OpenNebula Project decided to take a step forward and change the good old mailing list to a new discourse forum as the vehicle for community support. This was a well meditated decision which we hope pleases the community, but we will also keep an open ear for alternatives!

The OpenNebula community is a very engaged one and never sleeps! Check out this server for recording VM and Host monitoring traffic. Feedback on product flaws (like we receive in the mailing list and now in the new support forum) is crucial for the project. But also very important for the project is the positive feedback, like these received on Twitter: blush number one and double blush number two. Keep on keeping on!

Members of the dev team get bored from time to time (not much though, too much work), and come up with amazing stuff for the community, like this integration of OpenNebula and Latch.

Pushing OpenNebula to its limits is fun to watch. Like launching 100 CoreOS VMs in 3’21” -2 seconds per VM-. Woah, awesomeness should have a speed limit!

Outreach

After the second edition of the OpenNebula Conference, we are already preparing for the upcoming, third edition in Barcelona, October 2015. Interested? You are still in time for getting a good price deal for tickets. If you want to share your OpenNebula experiences, the call for papers is open as well until the end of March. Moreover, your company may be interested in the sponsorship opportunities for OpenNebulaConf 2015.


Recently, Spanish-language video sessions called Jornadas Rediris were recorded, capturing user experiences with OpenNebula. If you are fluent in Spanish, check the recordings, because there is really good content and insight in them.

We are also fostering a number of OpenNebula Technology Days in several cities across the world. We will start with Prague in the Czech Republic this 25th of March, and we plan to follow with other cities including Chicago, Dallas and Dublin. Send us an email or write to the community discuss mailing list if you are interested in hosting a TechDay event.

During the following months, members of the OpenNebula team will be speaking in the following events:

We want to highlight the strong presence the OpenNebula project will have in the Cloud Expo Europe 2015, this 11th and 12th of March in London. OpenNebula will sponsor one of the corners of the Open Cloud & Developer Park. Besides several members of the OpenNebula Team, partners that add value to OpenNebula will be present, like viApps, CloudWeavers as well as OpenNebula Systems, the company behind OpenNebula. If you are in London, come round for some special OpenNebula talks!

If you are interested in receiving OpenNebula training, check the schedule for 2015 public classes at OpenNebula Headquarters. Please contact us if you would like to request training near you.

Remember that you can see slides and resources from past events in our Events page. We have also created a Slideshare account where you can see the slides from some of our recent presentations.