EC2 Service Configuration 3.0


Overview

The OpenNebula EC2 Query service is a web service that enables you to launch and manage virtual machines in your OpenNebula installation through the Amazon EC2 Query interface. In this way, you can use any EC2 Query tool or utility to access your Private Cloud. The EC2 Query web service is implemented on top of the OpenNebula Cloud API (OCA) layer, which exposes the full capabilities of an OpenNebula private cloud, and Sinatra, a widely used lightweight web framework.

The current implementation includes the basic routines needed to use a cloud, namely: image upload and registration, and the VM run, describe and terminate operations. The following sections explain how to install and configure the EC2 Query web service on top of a running OpenNebula cloud.

:!: The OpenNebula EC2 Query service provides an Amazon EC2 Query API compatible interface to your cloud that can be used alongside the native OpenNebula CLI or the libvirt interface.

:!: The OpenNebula distribution includes the tools needed to use the EC2 Query service.

Requirements & Installation

You must have an OpenNebula site properly configured and running before installing the EC2 Query service; be sure to check the OpenNebula Installation and Configuration Guides to set up your private cloud first. This guide also assumes that you are familiar with the configuration and use of OpenNebula.

The EC2 Query service was installed during the OpenNebula installation, so you just need to install the following packages to meet the runtime dependencies:

  • The Amazon EC2 Query API library:

<xterm>$ sudo gem install amazon-ec2</xterm>

  • The Sinatra web framework and the thin web server:

<xterm>
$ sudo gem install sinatra
$ sudo gem install thin
</xterm>

  • The libraries for the Client Tools (package names are taken from the Ubuntu distribution):

<xterm>
$ sudo gem install curb
$ sudo apt-get install libsqlite3-ruby
$ sudo apt-get install libcurl4-gnutls-dev
$ sudo apt-get install libopenssl-ruby1.8
$ sudo gem install sqlite3-ruby
</xterm>
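
You can quickly check that the required gems were picked up by RubyGems (a simple sanity check, not part of the official procedure):

<xterm>
$ gem list | grep -E 'amazon-ec2|sinatra|thin|curb|sqlite3'
</xterm>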

Configuration

The service is configured through the /etc/one/econe.conf file, where you can set up the basic operational parameters for the EC2 Query web service, namely:

  • Connection Parameters: the XML-RPC endpoint of the oned daemon, and the server and port that make up the EC2_URL. This will be the URL of your cloud.
  • Virtual Machine Types: each instance type defines a name and the OpenNebula template to be used for VMs of that type.

The following table summarizes the available options:

^ VARIABLE ^ VALUE ^
| :one_xmlrpc | oned XML-RPC service endpoint, e.g. http://localhost:2633/RPC2 |
| :server | FQDN for your cloud server |
| :port | Port for incoming connections |
| :auth | Authentication protocol for the econe server |
| :instance_types | The VM types for your cloud |

:!: The :server must be a FQDN; do not use IP addresses here.

:!: Preserve YAML syntax in the econe.conf file.

Example:

# OpenNebula server contact information
:one_xmlrpc: http://localhost:2633/RPC2

# Host and port where econe server will run
:server: cloud.opennebula.org
:port: 4567

# SSL proxy that serves the API (set if one is being used)
#:ssl_server: fqdn.of.the.server

# Authentication protocol for the econe server:
#   ec2, for the Amazon EC2 access and secret key scheme
#   x509, for x509 certificate based authentication
:auth: ec2

# VM types allowed and their template files (inside the templates directory)
:instance_types:
  :m1.small:
    :template: m1.small.erb

Cloud Users

The cloud users have to be created in the OpenNebula system by oneadmin using the oneuser utility, following the same procedure used to create private cloud users. Once a user is registered in the system, they can start using it.

The users will authenticate using the Amazon EC2 procedure: the AWSAccessKeyId is their OpenNebula username, and the AWSSecretAccessKey is their OpenNebula hashed password.
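
For example, to register a cloud user and check the values to be used as EC2 credentials (a sketch only; the password column shown by oneuser holds the hashed password, and the exact output columns may vary between versions):

<xterm>
$ oneuser create clouduser my_password
ID: 12
$ oneuser list
</xterm>

The AWSAccessKeyId for this user would then be clouduser, and the AWSSecretAccessKey the hashed password stored by OpenNebula.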

Networking for the Cloud VMs

By default, the templates include a NIC interface to be attached to a virtual network. You have to create this network using the onevnet utility with the IPs you want to lease to the VMs created with the EC2 Query service.
<xterm>
$ onevnet create /tmp/templates/vnet
ID: 4
</xterm>
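
A minimal ranged network template for /tmp/templates/vnet could look like the following sketch (the name, bridge and addresses are illustrative values; adjust them to your site):

NAME            = "ec2public"
TYPE            = RANGED
BRIDGE          = br0
NETWORK_ADDRESS = 192.168.100.0
NETWORK_SIZE    = C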

Remember that you will have to add this VNet to the users group (ID:1) and make it public in order to get leases from it.
<xterm>
$ onevnet chgrp 4 1
$ onevnet publish 4
</xterm>

:!: You will have to update the NIC template, inside the /etc/one/ec2query_templates directory, in order to use this VNet ID

Defining VM Types

You can define as many Virtual Machine types as you want, just:

  • Create a template for the new type and place it in /etc/one/ec2query_templates. This template will be completed with the data of each cloud run-instances request and then submitted to OpenNebula. You can start by modifying the m1.small.erb example to adjust it to your cloud:
NAME   = eco-vm

CPU    = 1
MEMORY = 1024

OS = [ kernel     = /vmlinuz,
       initrd     = /initrd.img,
       root       = sda1,
       kernel_cmd = "ro xencons=tty console=tty1"]

DISK = [ IMAGE_ID = <%= erb_vm_info[:img_id] %> ]

NIC = [ NETWORK_ID = 4 ]

IMAGE_ID = <%= erb_vm_info[:ec2_img_id] %>
INSTANCE_TYPE = <%= erb_vm_info[:instance_type] %>

<% if erb_vm_info[:user_data] %>
CONTEXT = [ 
	EC2_USER_DATA = "<%= erb_vm_info[:user_data] %>" ,
	TARGET = "hdc"
	]
<% end %>
  • Add an entry for the new type to the :instance_types section of /etc/one/econe.conf, with its name and the template that should be used (a sketch of an m1.large.erb is shown after the note below):
# VM types allowed and their template files (inside the templates directory)
:instance_types:
  :m1.small:
    :template: m1.small.erb
  :m1.large:
    :template: m1.large.erb

:!: The templates are processed by the EC2 server to include specific data for the instance; you should not need to modify the <%= … %> expressions. Start by adjusting the OS, CPU and MEMORY to your needs.
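
Following the same pattern, a sketch of the m1.large.erb template referenced above could simply scale up the capacity values of m1.small.erb (the figures below are purely illustrative):

NAME   = eco-vm

CPU    = 2
MEMORY = 4096

OS = [ kernel     = /vmlinuz,
       initrd     = /initrd.img,
       root       = sda1,
       kernel_cmd = "ro xencons=tty console=tty1"]

DISK = [ IMAGE_ID = <%= erb_vm_info[:img_id] %> ]

NIC = [ NETWORK_ID = 4 ]

IMAGE_ID = <%= erb_vm_info[:ec2_img_id] %>
INSTANCE_TYPE = <%= erb_vm_info[:instance_type] %>

<% if erb_vm_info[:user_data] %>
CONTEXT = [
	EC2_USER_DATA = "<%= erb_vm_info[:user_data] %>" ,
	TARGET = "hdc"
	]
<% end %>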

Starting the Cloud Service

To start the EC2 Query service, just issue the following command:
<xterm>
$ econe-server start
</xterm>
You can find the econe server log file in /var/log/one/econe-server.log.

To stop the EC2 Query service:
<xterm>
$ econe-server stop
</xterm>
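
Once the service is up, you can exercise the basic operations with the econe client tools shipped with the distribution. The session below is only a sketch: the placeholders in angle brackets must be replaced with real values, and the exact options may differ between releases (check each tool's --help output):

<xterm>
$ econe-upload --access-key clouduser --secret-key <hashed_password> --url http://cloud.opennebula.org:4567 /path/to/image.img
$ econe-register --access-key clouduser --secret-key <hashed_password> --url http://cloud.opennebula.org:4567 <image_id>
$ econe-run-instances --access-key clouduser --secret-key <hashed_password> --url http://cloud.opennebula.org:4567 <ami_id>
$ econe-describe-instances --access-key clouduser --secret-key <hashed_password> --url http://cloud.opennebula.org:4567
$ econe-terminate-instances --access-key clouduser --secret-key <hashed_password> --url http://cloud.opennebula.org:4567 <instance_id>
</xterm>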

Advanced Configuration

Authorization methods

The OpenNebula EC2 Server supports two authentication methods. The method can be set in econe.conf, as explained above. These two methods are:

EC2 Auth

In the EC2 mode, a signature is generated based on the user credentials.

x509 Auth

This method performs the request to OpenNebula based on an x509 certificate DN (Distinguished Name). The DN is extracted from the certificate and matched to the password value in the user database (remember that spaces are removed from DNs).

In order to use this method, OpenNebula must be configured with the x509 for Public Clouds settings.

Note that OpenNebula will not verify that the user is holding a valid certificate at the time of login: this is expected to be done by the external container of the EC2 server (normally Apache), whose job is to tell the user's client that the site requires a user certificate and to check that the certificate is consistently signed by the chosen Certificate Authority (CA).

:!: EC2 x509 auth method only handles the authorization of the user. Authentication of the user certificate is a complementary setup, which can rely on Apache.
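
For example, to switch the server to certificate-based authorization, the corresponding entry in econe.conf would read:

# Authentication protocol for the econe server
:auth: x509

The default value shown in the example configuration above, :auth: ec2, keeps the standard access/secret key scheme.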

Configuring an SSL Proxy

The OpenNebula EC2 Query Service natively supports only plain HTTP connections. If the extra security provided by SSL is needed, a proxy can be set up to handle the SSL connection, forward the request to the EC2 Query Service and relay the answer back to the client.

This setup needs:

  • A server certificate for the SSL connections
  • An HTTP proxy that understands SSL
  • EC2 Query Service configuration to accept requests from the proxy

If you want to try out the SSL setup easily, the following lines show an example that sets up a self-signed certificate to be used by lighttpd, configured to act as an HTTP proxy in front of a correctly configured EC2 Query Service.

Let's assume the server where the lighttpd proxy is going to be started is called cloudserver.org. Therefore, the steps are:

1. Snakeoil Server Certificate

We are going to generate a snakeoil certificate. If you are using an Ubuntu system, follow the next steps (otherwise your mileage may vary, but not a lot):

  • Install the ssl-cert package

<xterm> $ sudo apt-get install ssl-cert </xterm>

  • Generate the certificate

<xterm> $ sudo /usr/sbin/make-ssl-cert generate-default-snakeoil </xterm>

  • As we are using lighttpd, we need to concatenate the private key and the certificate to obtain a server certificate valid for lighttpd

<xterm> $ sudo sh -c 'cat /etc/ssl/private/ssl-cert-snakeoil.key /etc/ssl/certs/ssl-cert-snakeoil.pem > /etc/lighttpd/server.pem' </xterm>
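
If you want to double-check the certificate that was just generated (an optional sanity check, not part of the original procedure), standard openssl can print its subject and validity period:

<xterm>
$ openssl x509 -in /etc/ssl/certs/ssl-cert-snakeoil.pem -noout -subject -dates
</xterm>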

2. lighttpd as an SSL HTTP Proxy

You will need to edit the /etc/lighttpd/lighttpd.conf configuration file and:

  • Add the following modules (if not present already)
    • mod_access
    • mod_alias
    • mod_proxy
    • mod_accesslog
    • mod_compress
  • Change the server port to 443 if you are going to run lighttpd as root, or any number above 1024 otherwise:
server.port               = 8443
  • Add the proxy module section:
#### proxy module
## read proxy.txt for more info
proxy.server               = ( "" =>
                                ("" =>
                                 (
                                   "host" => "127.0.0.1",
                                   "port" => 4567
                                 )
                                 )
                             )


#### SSL engine
ssl.engine                 = "enable"
ssl.pemfile                = "/etc/lighttpd/server.pem"

The host must be the hostname of the computer running the EC2 Query Service, and the port must be the one the EC2 Query Service is listening on.

3. EC2 Query Service Configuration

The econe.conf needs to define the following:

# Host and port where the econe server will run
:server: 127.0.0.1
:port: 4567

# SSL proxy that serves the API (set if one is being used)
:ssl_server: cloudserver.org

Once the lighttpd server is started, EC2 Query requests using HTTPS URIs can be directed to https://cloudserver.org:8443; they will then be decrypted, passed to localhost on port 4567, satisfied (hopefully), encrypted again and passed back to the client.
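
As a quick end-to-end test (again, only a sketch: replace the credentials with those of a registered cloud user and check the econe tools' --help output for the exact options), you can point one of the econe client tools at the SSL endpoint:

<xterm>
$ econe-describe-images --access-key clouduser --secret-key <hashed_password> --url https://cloudserver.org:8443
</xterm>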

Using a specific group for EC2

It is recommended to create a new group to handle the ec2 cloud users:
<xterm>
$ onegroup create ec2
ID: 100
</xterm>
Create the users and add them to the ec2 group (ID:100):
<xterm>
$ oneuser create clouduser my_password
ID: 12
$ oneuser chgrp 12 100
</xterm>

And add default ACL rules for this group (ID:100):
<xterm>
$ oneacl create "@100 VM+NET+IMAGE+TEMPLATE/* CREATE+INFO_POOL_MINE"
</xterm>

Also, you will have to create ACL rules so that the cloud users are able to deploy their VMs in the allowed hosts.
<xterm>
$ onehost list

ID NAME               RVM   TCPU   FCPU   ACPU   TMEM   FMEM   AMEM   STAT
 0 kvm01               0    800    800    800   7.8G   6.7G   7.8G     on
 1 xen01               0    800    800    800   7.8G   6.7G   7.8G     on
 3 kvm03               0    100     99    100     2G   512M     2G     on

</xterm>
The following rules will allow users inside the ec2 group (ID:100) to deploy VMs in the hosts kvm01 (ID:0) and kvm03 (ID:3):
<xterm>
$ oneacl create "@100 HOST/#0 USE"
$ oneacl create "@100 HOST/#3 USE"
</xterm>
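
You can review the ACL rules that are in place at any time with:

<xterm>
$ oneacl list
</xterm>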

You have to create a VNet using the onevnet utility with the IPs you want to lease to the VMs created with the EC2 Query service.
<xterm>
$ onevnet create /tmp/templates/vnet
ID: 12
</xterm>

Remember that you will have to add this VNet (ID:12) to the users group (ID:100) and make it public in order to get leases from it.
<xterm>
$ onevnet chgrp 12 100
$ onevnet publish 12
</xterm>

:!: You will have to update the NIC template, inside the /etc/one/ec2query_templates directory, in order to use this VNet ID