IaaS Clouds on Grid'5000
This tutorial introduces a set of tools designed to automatically deploy and configure Infrastructure-as-a-Service Cloud frameworks on Grid'5000.
OpenNebula is a fully open-source IaaS Cloud implementation designed to address the requirements of business use cases. It consists of a set of virtualization tools for managing local data centers, as well as for interconnecting multiple Cloud environments. The main design principles of the OpenNebula project include a modular and extensible architecture, scalability to large-scale infrastructures, interoperability with existing Cloud offerings, and a fully open-source implementation. OpenNebula aims to provide standardized interfaces for managing virtual machines and data, such as the Amazon EC2 and OCCI APIs.
Deploying OpenNebula on Grid'5000
Making a reservation
In order to deploy OpenNebula on Grid'5000, we first have to make a reservation on one or multiple Grid'5000 sites. Here is an example of an interactive reservation on the Rennes, Nancy and Sophia sites:
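A multi-site reservation of this kind can be made with oargridsub; the node counts and walltime below are only illustrative:

oargridsub -t deploy -w '2:00:00' rennes:rdef="slash_22=1+nodes=2",nancy:rdef="slash_22=1+nodes=2",sophia:rdef="slash_22=1+nodes=2"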
In addition to the number of needed nodes, the reservation has to specify the virtual subnet you plan to use for your experiments. In this tutorial we will use a /22 subnet (a range of 1024 IPs) on each site. The reservation request returns a batch reservation ID for each Grid'5000 site, which we will later use to deploy the cloud framework:
[sophia] Reservation success on sophia : batchId =
[nancy] Reservation success on nancy : batchId =
[rennes] Reservation success on rennes : batchId = 419881
[OAR_GRIDSUB] Grid reservation id = 33897
[OAR_GRIDSUB] SSH KEY : /tmp/oargrid//oargrid_ssh_key_acarpena_33897
You can use this key to connect directly to your OAR nodes with the oar user.
The OpenNebula deployment scripts have a set of Ruby gem dependencies. To install them on a site's frontend, run the following commands:
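For example, assuming the dependencies include the json and net-ssh gems (placeholder names; check the list shipped with the scripts):

gem install --user-install json net-ssh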
The deployment scripts can be found on the Rennes site.
The OpenNebula deployment script supports multiple job reservations, provided by the user through a YAML file. Create a file named jobs.yml in your home directory with the following contents, replacing the highlighted values with the ones corresponding to your job:
---
- hypervisor: kvm
  uid: <rennes batchId>
  site: rennes
- hypervisor: kvm
  uid: <sophia batchId>
  site: sophia
- hypervisor: kvm
  uid: <nancy batchId>
  site: nancy
Go to the scripts' location:
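For example (the path below is only illustrative; use the actual location of the scripts on the Rennes frontend):

cd ~/opennebula-deployment-scripts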
Then execute the script with the jobs.yml file as a parameter:
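For example, if the main deployment script is called deploy_cloud.rb (the name here is only illustrative; use the actual script shipped with the tools):

ruby deploy_cloud.rb ~/jobs.yml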
The script will deploy a Debian Squeeze environment on the reserved nodes, download the latest OpenNebula packages (currently the OpenNebula 3.4 release), and finally install and configure the OpenNebula cloud. The cloud consists of a set of compute nodes that will host virtual machines and a frontend node, which serves as the Cloud entry point.
After a successful deployment, the script will print the hostname of the frontend node:
I, [2012-03-27T17:55:49.682093 #8244]  INFO -- : Broadcasting node configurations
I, [2012-03-27T17:55:51.155931 #8244]  INFO -- : Configuring and installing OpenNebula (please wait)
I, [2012-03-27T17:56:10.686188 #8244]  INFO -- : Installation finished
OpenNebula cloud deployed with 5 nodes:
- The OpenNebula frontend node (ONfrontend) is
Verifying the deployment
Connect to the frontend node as root, replacing the hostname with the one returned by the deployment:
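ssh root@<frontend hostname>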
Log in as the oneadmin user, the default administrator account provided by OpenNebula:
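su - oneadmin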
List the existing VMM nodes using the following command:
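onehost list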
If OpenNebula was deployed and configured properly, the output should be a list of compute nodes, such as the following one:
  ID NAME            CLUSTER RVM  TCPU  FCPU  ACPU  TMEM  FMEM  AMEM STAT
   0 paradent-7.renn rennes    0   800   800   800 31.5G 31.3G 31.5G   on
   1 parapluie-20.re rennes    0  2400  2400  2400 47.3G 46.9G 47.3G   on
   2 griffon-3.nancy nancy     0   800   800   800 15.7G 15.5G 15.7G   on
   3 griffon-5.nancy nancy     0   800   800   800 15.7G 15.5G 15.7G   on
   4 sol-18.sophia.g sophia    0   400   400   400  3.9G  3.8G  3.9G   on
   5 sol-20.sophia.g sophia    0   400   400   400  3.9G  3.8G  3.9G   on
Each compute node is associated with a cluster corresponding to the Grid'5000 site hosting the node.
The deployment scripts automatically configure a set of virtual networks that assign each Grid'5000 site a matching range of virtual IPs. Check if the virtual networks have been correctly defined:
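onevnet list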
This command should output one virtual network for each site belonging to the user reservation, each of them owned by default by the oneadmin user:
ID USER     GROUP    NAME   TYPE BRIDGE LEASES
 0 oneadmin oneadmin rennes F    br0    0
 1 oneadmin oneadmin nancy  F    br0    0
 2 oneadmin oneadmin sophia F    br0    0
Creating a virtual network for each Grid'5000 site is needed to ensure the VMs are assigned valid IP addresses, as each Grid'5000 site has a private pool of virtual IPs. When a VM is deployed, it has to be configured with a predefined virtual network, from which it will receive a site-specific IP address. In order for the VM to be correctly deployed, the scheduler takes into account the cluster property of the compute nodes, so that each VM is placed on the Grid'5000 site that matches its IP address.
Defining and executing Virtual Machines
This section gives a short introduction to defining Virtual Machine templates and executing them as VM instances in OpenNebula. The full tutorial can be found on the OpenNebula website.
Storage management in OpenNebula 3.4 relies on the concept of datastores (see the OpenNebula website).
These deployment scripts create a simple, non-shared file system datastore for the VM images, which uses ssh to distribute the images to the hosts.
Datastores can be accessed through the onedatastore command:
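onedatastore list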
The datastore created automatically by the scripts is called ONstore, and it should be listed along with the two datastores predefined by OpenNebula:
 ID NAME    CLUSTER IMAGES TYPE TM
  0 system  -       0      -    shared
  1 default -       0      fs   shared
100 ONstore -       1      fs   ssh
Virtual Machine Images
Users can define disk images and store them in an Image Repository, as described in the OpenNebula documentation. The following example provides a description for the most common type of image, an operating system image:
NAME        = ttylinux
PATH        = /home/acarpena/openNebulaImages/ttylinux/ttylinux.img
TYPE        = OS
DESCRIPTION = "ttylinux image."
Note that the path to the image must be accessible from the OpenNebula frontend. In this example we used an image stored on the shared NFS home directory.
The Image Repository can be managed with the oneimage command.
To register the image in the repository, copy the previous image description to a new file and run the following command as the oneadmin user:
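oneimage create <image description file> -d ONstore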
The image has to be stored in one of the existing datastores previously defined in OpenNebula. In this example, we used the ONstore datastore described in the previous step.
The oneimage command can also be used to list the already registered images:
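oneimage list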
The deployment scripts create a default VM image and add it to the Image Repository. Upon deployment, the list of images should be the following:
ID USER     GROUP    NAME     SIZE TYPE REGTIME        PER STAT RVMS
 0 oneadmin oneadmin ttylinux  40M   OS 04/03 09:44:05  No  rdy    0
Details of any image in the list can be retrieved by issuing a show command for a specific image:
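oneimage show 0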
Virtual Machine Templates
Once an image has been registered into the system, the user can define a VM template that can be later used to deploy one or several VM instances. The following example defines a VM with 512MB of memory and one CPU, along with a set of additional properties needed for the Grid'5000 deployment:
NAME   = ttylinuxVM
CPU    = 1
MEMORY = 512

DISK = [ IMAGE = "ttylinux" ]

NIC = [ NETWORK = "rennes" ]

FEATURES = [ acpi = "no" ]

CONTEXT = [
  hostname    = "$NAME$VMID",
  ip_public   = "$NIC[IP]",
  gateway     = "$NETWORK[GATEWAY, NETWORK=\"rennes\"]",
  netmask     = "$NETWORK[NETWORK_MASK, NETWORK=\"rennes\"]",
  dns         = "$NETWORK[DNS, NETWORK=\"rennes\"]",
  files       = "/home/acarpena/openNebulaImages/ttylinux/init.sh /var/lib/one/.ssh/id_rsa.pub",
  target      = "hdc",
  root_pubkey = "id_rsa.pub",
  username    = "opennebula",
  user_pubkey = "id_rsa.pub"
]
To properly define a VM template that can be used in a multi-site Grid'5000 reservation, the user has to specify the following components:
- the disk image, which needs to be previously registered in the Image Repository
- the virtual network to which the VM will be attached. In our case, it has to be one of the predefined virtual networks already created by the deployment scripts.
- contextualization information, which has to include the gateway, network mask and DNS values provided by the selected virtual network, as they are specific to each Grid'5000 site as well. A full tutorial for VM contextualization in OpenNebula can be found in the latest documentation.
The Template Repository can be managed with the onetemplate command.
To register the VM definition in the repository, copy the previous template description to a new file and run the following command as the oneadmin user:
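onetemplate create <template description file>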
The list of existing templates can be retrieved with the same command:
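onetemplate list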
The repository should contain a set of predefined VM templates by default, each of them corresponding to a specific site in the user reservation forwarded to the deployment scripts.
ID USER     GROUP    NAME            REGTIME
 0 oneadmin oneadmin ttylinuxVM_renn 04/03 09:44:06
 1 oneadmin oneadmin ttylinuxVM_nanc 04/03 09:44:06
 2 oneadmin oneadmin ttylinuxVM_soph 04/03 09:44:07
The onetemplate command provides a show option to list the details of any stored VM template:
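onetemplate show 0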
Deploying VM instances
A VM template can be used to instantiate any number of VMs. A single VM can be executed using the following command:
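onetemplate instantiate <template ID>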
If multiple VMs with similar properties are needed, the onetemplate tool provides a --multiple option. The command in the example below deploys 10 VMs simultaneously:
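onetemplate instantiate <template ID> --multiple 10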
Once the VMs have been instantiated, they can be managed with the onevm command, which provides a set of options to control the VM life cycle.
To list the current VM instances, execute the following command:
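onevm list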
The main columns of the output include the VM ID generated by OpenNebula, the template used to generate the VM and its status, as in the example below:
ID USER     GROUP    NAME  STAT CPU MEM HOSTNAME        TIME
 0 oneadmin oneadmin one-0 runn   0  0K suno-35.sophia. 00 00:00:36
To retrieve the IP address of a specific VM, use the following command, replacing ID with one of the listed VM IDs:
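onevm show ID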
The output of this command includes the state of the running VM, as well as monitoring and contextualization information:
VIRTUAL MACHINE 0 INFORMATION
ID             : 0
NAME           : one-0
USER           : oneadmin
GROUP          : oneadmin
STATE          : ACTIVE
LCM_STATE      : RUNNING
HOSTNAME       : suno-35.sophia.grid5000.fr
START TIME     : 04/05 16:49:48
END TIME       : -
DEPLOY ID      : one-0

VIRTUAL MACHINE MONITORING
NET_TX         : 0
NET_RX         : 0
USED MEMORY    : 0
USED CPU       : 0

PERMISSIONS
OWNER          : um-
GROUP          : ---
OTHER          : ---

VIRTUAL MACHINE TEMPLATE
CONTEXT=[
  DNS=126.96.36.199,
  FILES="/home/acarpena/openNebulaImages/ttylinux/init.sh /var/lib/one/.ssh/id_rsa.pub",
  GATEWAY=10.167.255.254,
  HOSTNAME=one-00,
  IP_PUBLIC=10.164.0.1,
  NETMASK=255.252.0.0,
  ROOT_PUBKEY=id_rsa.pub,
  TARGET=hdc,
  USERNAME=opennebula,
  USER_PUBKEY=id_rsa.pub ]
CPU=1
DISK=[
  CLONE=YES,
  DISK_ID=0,
  IMAGE=ttylinux,
  IMAGE_ID=0,
  READONLY=NO,
  SAVE=NO,
  SOURCE=/var/lib/one/images/276644896564b77dbf611ef5bada9a75,
  TARGET=hda,
  TYPE=DISK ]
MEMORY=512
NAME=one-0
NIC=[
  BRIDGE=br0,
  IP=10.164.0.1,
  MAC=02:00:0a:a4:00:01,
  NETWORK=sophia,
  NETWORK_ID=1,
  VLAN=NO ]
REQUIREMENTS="CLUSTER=\"sophia\""
TEMPLATE_ID=1
VMID=0
If the VM has reached the running state, you can connect to it through ssh:
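Using the IP address and the username defined in the VM's contextualization (both visible in the onevm show output above):

ssh opennebula@10.164.0.1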