IaaS Clouds on Grid'5000

This tutorial introduces a set of tools designed to automatically deploy and configure Infrastructure-as-a-Service Cloud frameworks on Grid'5000.


OpenNebula

OpenNebula is a fully open-source IaaS Cloud implementation designed to address the requirements of business use cases. It consists of a set of virtualization tools for managing local data centers, as well as for interconnecting multiple Cloud environments. The main design principles of the OpenNebula project include a modular and extensible architecture, scalability to large-scale infrastructures, interoperability with existing Cloud offerings, and a fully open-source implementation. OpenNebula aims to provide standardized interfaces for managing virtual machines and data, such as the Amazon EC2 and OCCI APIs.

Deploying OpenNebula on Grid'5000

Making a reservation

In order to deploy OpenNebula on Grid'5000, we first have to make a reservation on one or multiple Grid'5000 sites. Here is an example of an interactive reservation on the Rennes, Nancy and Sophia sites:

Terminal.png frontend:
oargridsub -t deploy -w 2:00:00 rennes:rdef="/nodes=2+slash_22=1",sophia:rdef="/nodes=2+slash_22=1",nancy:rdef="/nodes=2+slash_22=1"

In addition to the number of needed nodes, the reservation has to specify the virtual subnet you plan to use for your experiments. In this tutorial we will use a /22 subnet (a range of 1024 IPs) for each of the sites. The reservation request will return a batch reservation ID for each Grid'5000 site, which we will later use to deploy the cloud framework:

[OAR_GRIDSUB] [sophia] Reservation success on sophia : batchId = 493220
[OAR_GRIDSUB] [nancy] Reservation success on nancy : batchId = 463324
[OAR_GRIDSUB] [rennes] Reservation success on rennes : batchId = 419881
[OAR_GRIDSUB] Grid reservation id = 33897
[OAR_GRIDSUB] SSH KEY : /tmp/oargrid//oargrid_ssh_key_acarpena_33897
       You can use this key to connect directly to your OAR nodes with the oar user.
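
Should the list of reserved nodes be needed later, it can be retrieved with oargridstat (a sketch assuming the usual -w -l nodes options, with 33897 being the grid reservation ID from the output above):

Terminal.png frontend:
oargridstat -w -l nodes 33897 | sort -u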

Prerequisites

The OpenNebula deployment scripts have a set of Ruby gem dependencies. To install them on a site's frontend, run the following commands:

Terminal.png frontend:
export GEM_HOME=$HOME/.gem
gem install gosen json net-ssh net-scp restfully
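
Note that GEM_HOME is only set for the current shell; if you later run the deployment script from another session, export it there as well. One way to make the setting persistent (assuming a bash login shell) is:

Terminal.png frontend:
echo 'export GEM_HOME=$HOME/.gem' >> ~/.bashrc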

Deploying OpenNebula

The deployment scripts can be found on the Rennes site, at the following location:

/home/acarpena/opennebula-deploy/openNebula3.4-v2

The OpenNebula deployment script supports multiple job reservations, provided by the user through a YAML file. Create a file named jobs.yml in your home directory with the following contents, replacing the uid and site values with the ones corresponding to your jobs:

---
- hypervisor: kvm
  uid: 419881
  site: rennes
- hypervisor: kvm
  uid: 493220
  site: sophia
- hypervisor: kvm
  uid: 463324
  site: nancy

Go to the scripts' location:

Terminal.png frontend:
cd /home/acarpena/opennebula-deploy/openNebula3.4-v2

Then execute the script with the jobs.yml file as a parameter:

Terminal.png frontend:
./deployOpenNebula.rb ~/jobs.yml

The script will deploy a Debian Squeeze environment on the reserved nodes, download the latest OpenNebula packages (currently the OpenNebula 3.4 release), and finally install and configure the OpenNebula cloud. The resulting cloud consists of a set of compute nodes that host virtual machines and a frontend node, which serves as the Cloud entry point.

After a successful deployment, the script will print the hostname of the frontend node:

I, [2012-03-27T17:55:49.682093 #8244]  INFO -- : Broadcasting node configurations
I, [2012-03-27T17:55:51.155931 #8244]  INFO -- : Configuring and installing OpenNebula (please wait)
I, [2012-03-27T17:56:10.686188 #8244]  INFO -- : Installation finished

OpenNebula cloud deployed with 5 nodes:
       - The OpenNebula frontend node (ONfrontend) is parapluie-22.rennes.grid5000.fr.

Verifying the deployment

Connect to the frontend node as root, replacing the hostname with the one returned by the deployment:

Terminal.png frontend:
ssh root@parapluie-22.rennes.grid5000.fr

Log in as the oneadmin user, the default administrator account provided by OpenNebula:

Terminal.png ONfrontend:
su - oneadmin

List the existing VMM (compute) nodes using the following command:

Terminal.png ONfrontend:
onehost list

If OpenNebula was deployed and configured properly, the output should be a list of compute nodes, such as the following one:

  ID NAME               CLUSTER     RVM   TCPU   FCPU   ACPU   TMEM   FMEM   AMEM   STAT
  0 paradent-7.renn      rennes        0    800    800    800  31.5G  31.3G  31.5G     on
  1 parapluie-20.re      rennes        0   2400   2400   2400  47.3G  46.9G  47.3G     on
  2 griffon-3.nancy      nancy         0    800    800    800  15.7G  15.5G  15.7G     on
  3 griffon-5.nancy      nancy         0    800    800    800  15.7G  15.5G  15.7G     on
  4 sol-18.sophia.g      sophia        0    400    400    400   3.9G   3.8G   3.9G     on
  5 sol-20.sophia.g      sophia        0    400    400    400   3.9G   3.8G   3.9G     on

Each compute node is associated with a cluster corresponding to the Grid'5000 site hosting the node.
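
To check the cluster assignment of an individual node, inspect it with onehost show (here for the node with ID 0 in the list above); its CLUSTER attribute should match the node's Grid'5000 site:

Terminal.png ONfrontend:
onehost show 0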

The deployment scripts automatically configure a set of virtual networks that assign each Grid'5000 site a matching range of virtual IPs. Check that the virtual networks have been defined correctly:

Terminal.png ONfrontend:
onevnet list

This command should output one virtual network for each site in your reservation, each owned by default by the oneadmin user:

 ID USER     GROUP    NAME              TYPE BRIDGE  LEASES
  0 oneadmin oneadmin rennes               F    br0       0
  1 oneadmin oneadmin nancy                F    br0       0
  2 oneadmin oneadmin sophia               F    br0       0

Creating a virtual network for each Grid'5000 site is needed to ensure that VMs are assigned valid IP addresses, as each Grid'5000 site has a private pool of virtual IPs (see the Grid'5000 network documentation for more details). When a VM is deployed, it has to be configured with a predefined virtual network, from which it receives an IP address. For the VM to be deployed correctly, the scheduler takes the cluster property of the compute nodes into account and places each VM on the Grid'5000 site matching its IP address.
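
To see the IP range a given virtual network provides, inspect it with onevnet show (here for network ID 0, the rennes network from the list above):

Terminal.png ONfrontend:
onevnet show 0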

Defining and executing Virtual Machines

This section gives a short introduction to defining Virtual Machine templates and executing them as VM instances in OpenNebula. The full tutorial can be found on the OpenNebula website.

Datastores

Storage management in OpenNebula 3.4 relies on the concept of datastores (see the OpenNebula website). The deployment scripts create a simple, non-shared file system datastore for the VM images, which uses ssh to distribute the images to the hosts. Datastores can be accessed through the onedatastore command.

Terminal.png ONfrontend:
onedatastore list

The datastore created automatically by the scripts is called ONstore and it should be listed along with the two datastores predefined by OpenNebula:

 ID NAME            CLUSTER  IMAGES TYPE   TM
  0 system          -        0      -      shared
  1 default         -        0      fs     shared
100 ONstore         -        1      fs     ssh

Virtual Machine Images

Users can define disk images and store them in an Image Repository, as described in the OpenNebula documentation. The following example provides a description for the most common type of image, an operating system image:

NAME        = ttylinux
PATH        = /home/acarpena/openNebulaImages/ttylinux/ttylinux.img
TYPE        = OS
DESCRIPTION = "ttylinux image."

Note that the path to the image must be accessible from the OpenNebula frontend. In this example we use an image stored in the shared NFS home directory.

The Image Repository can be managed with the oneimage command. To register the image in the repository, copy the previous image description to a new file and run the following command as the oneadmin user:

Terminal.png ONfrontend:
oneimage create /path/to/image/description --datastore ONstore

The image has to be stored in one of the datastores previously defined in OpenNebula. In this example, we use the ONstore datastore described in the previous step.

The oneimage command can also be used to list the already registered images:

Terminal.png ONfrontend:
oneimage list

The deployment scripts create a default VM image and add it to the Image Repository. Upon deployment, the list of images should be the following:

 ID USER     GROUP    NAME            SIZE TYPE          REGTIME PER STAT  RVMS
  0 oneadmin oneadmin ttylinux         40M   OS   04/03 09:44:05  No  rdy     0

Details of any image in the list can be retrieved by issuing a show command with a specific image ID:

Terminal.png ONfrontend:
oneimage show ID

Virtual Machine Templates

Once an image has been registered into the system, the user can define a VM template that can later be used to deploy one or several VM instances. The following example defines a VM with 512MB of memory and one CPU, along with a set of additional properties needed for the Grid'5000 deployment:

NAME = ttylinuxVM

CPU    = 1
MEMORY = 512

DISK = [ IMAGE  = "ttylinux" ]

NIC    = [ NETWORK = "rennes" ]

FEATURES=[ acpi="no" ]

CONTEXT = [
   hostname    = "$NAME$VMID",
   ip_public   = "$NIC[IP]",
   gateway     = "$NETWORK[GATEWAY, NETWORK=\"rennes\"]",
   netmask     = "$NETWORK[NETWORK_MASK, NETWORK=\"rennes\"]",
   dns         = "$NETWORK[DNS, NETWORK=\"rennes\"]",
   files       = "/home/acarpena/openNebulaImages/ttylinux/init.sh /var/lib/one/.ssh/id_rsa.pub",
   target      = "hdc",
   root_pubkey = "id_rsa.pub",
   username    = "opennebula",
   user_pubkey = "id_rsa.pub"
]

To properly define a VM template that can be used in a multi-site Grid'5000 reservation, the user has to specify the following components:

  • the disk image, which needs to be previously registered in the Image Repository
  • the virtual network to which the VM will be attached. In our case, it has to be one of the predefined virtual networks already created by the deployment scripts.
  • contextualization information, which has to include the gateway, network mask and DNS values provided by the selected virtual network, as these are also specific to each Grid'5000 site (a per-site variant is sketched after this list). A full tutorial for VM contextualization in OpenNebula can be found in the latest documentation.
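
As an illustration, a variant of the previous template targeting the sophia site would keep the same image and resources but switch the NIC and CONTEXT entries to the sophia virtual network. This is a sketch of what the predefined per-site templates (such as ttylinuxVM_soph listed below) look like; the name here is illustrative:

NAME = ttylinuxVM_sophia

CPU    = 1
MEMORY = 512

DISK = [ IMAGE  = "ttylinux" ]

NIC    = [ NETWORK = "sophia" ]

FEATURES=[ acpi="no" ]

CONTEXT = [
   hostname    = "$NAME$VMID",
   ip_public   = "$NIC[IP]",
   gateway     = "$NETWORK[GATEWAY, NETWORK=\"sophia\"]",
   netmask     = "$NETWORK[NETWORK_MASK, NETWORK=\"sophia\"]",
   dns         = "$NETWORK[DNS, NETWORK=\"sophia\"]",
   files       = "/home/acarpena/openNebulaImages/ttylinux/init.sh /var/lib/one/.ssh/id_rsa.pub",
   target      = "hdc",
   root_pubkey = "id_rsa.pub",
   username    = "opennebula",
   user_pubkey = "id_rsa.pub"
]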

The Template Repository can be managed with the onetemplate command. To register the VM definition in the repository, copy the previous template description to a new file and run the following command as the oneadmin user:

Terminal.png ONfrontend:
onetemplate create /path/to/vm/template/description

The list of existing templates can be retrieved with the same command:

Terminal.png ONfrontend:
onetemplate list

The repository should contain a set of predefined VM templates by default, each of them corresponding to a specific site in the user reservation forwarded to the deployment scripts.

 ID USER     GROUP    NAME                         REGTIME
  0 oneadmin oneadmin ttylinuxVM_renn       04/03 09:44:06
  1 oneadmin oneadmin ttylinuxVM_nanc       04/03 09:44:06
  2 oneadmin oneadmin ttylinuxVM_soph       04/03 09:44:07

The onetemplate command provides a show option to list the details of any stored VM template, given its ID:

Terminal.png ONfrontend:
onetemplate show ID

Deploying VM instances

A VM template can be used to instantiate any number of VMs. A single VM can be executed using the following command:

Terminal.png ONfrontend:
onetemplate instantiate templateID

If multiple VMs with similar properties are needed, the onetemplate tool provides a --multiple option. The command in the example below deploys 10 VMs simultaneously:

Terminal.png ONfrontend:
onetemplate instantiate --multiple 10 templateID


Once the VMs have been instantiated, they can be managed with the onevm command, which provides a set of options to control the VM life cycle. To list the current VM instances, execute the following command:

Terminal.png ONfrontend:
onevm list

The main columns of the output include the VM ID generated by OpenNebula, the VM's status, and the host on which it runs, as in the example below:

   ID USER     GROUP    NAME         STAT CPU     MEM        HOSTNAME        TIME
    0 oneadmin oneadmin one-0        runn   0      0K suno-35.sophia. 00 00:00:36
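
Besides listing VMs, the onevm command exposes the usual life-cycle operations. For instance, a VM can be suspended, resumed, shut down or deleted (replace ID with one of the listed VM IDs):

Terminal.png ONfrontend:
onevm suspend ID
onevm resume ID
onevm shutdown ID
onevm delete ID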

To retrieve the IP address of a specific VM, use the following command, replacing ID with one of the listed VM IDs:

Terminal.png ONfrontend:
onevm show ID

The output of this command includes the state of the running VM, as well as monitoring and contextualization information:

VIRTUAL MACHINE 0 INFORMATION                                                   
ID                  : 0                    
NAME                : one-0                
USER                : oneadmin            
GROUP               : oneadmin            
STATE               : ACTIVE              
LCM_STATE           : RUNNING             
HOSTNAME            : suno-35.sophia.grid5000.fr
START TIME          : 04/05 16:49:48      
END TIME            : -                   
DEPLOY ID           : one-0               

VIRTUAL MACHINE MONITORING                                                      
NET_TX              : 0                   
NET_RX              : 0                   
USED MEMORY         : 0                   
USED CPU            : 0                   

PERMISSIONS                                                                     
OWNER               : um-                 
GROUP               : ---                 
OTHER               : ---                 

VIRTUAL MACHINE TEMPLATE                                                        
CONTEXT=[
  DNS=138.96.20.225,
  FILES="/home/acarpena/openNebulaImages/ttylinux/init.sh /var/lib/one/.ssh/id_rsa.pub",
  GATEWAY=10.167.255.254,
  HOSTNAME=one-00,
  IP_PUBLIC=10.164.0.1,
  NETMASK=255.252.0.0,
  ROOT_PUBKEY=id_rsa.pub,
  TARGET=hdc,
  USERNAME=opennebula,
  USER_PUBKEY=id_rsa.pub ]
CPU=1
DISK=[
  CLONE=YES,
  DISK_ID=0,
  IMAGE=ttylinux,
  IMAGE_ID=0,
  READONLY=NO,
  SAVE=NO,
  SOURCE=/var/lib/one/images/276644896564b77dbf611ef5bada9a75,
  TARGET=hda,
  TYPE=DISK ]
MEMORY=512
NAME=one-0
NIC=[
  BRIDGE=br0,
  IP=10.164.0.1,
  MAC=02:00:0a:a4:00:01,
  NETWORK=sophia,
  NETWORK_ID=1,
  VLAN=NO ]
REQUIREMENTS="CLUSTER= \"sophia\" "
TEMPLATE_ID=1
VMID=0

If the VM has reached the running state, you can connect to it through ssh:

Terminal.png ONfrontend:
ssh root@10.164.0.1