Note.png Note

This tutorial is actively maintained by the Grid'5000 team. If you encounter problems, please report them (see the Support page). Additionally, as it is a wiki page, you are free to make minor corrections yourself if needed. If you would like to suggest a more fundamental change, please contact the Grid'5000 team.

Warning.png Warning

This tutorial refers to an old version of OpenStack and may be broken. A newer deployment method is available at OpenStack_deployment_on_Grid5000


OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface.

OpenStack Arch

OpenStack is divided into multiple services.


Nova (compute)

Initially, Nova was the only OpenStack service. The nova-compute service relies on a virtualization driver to manage virtual machines. By default, this driver is libvirt, which is used to drive KVM. However, the libvirt driver can also drive other hypervisor technologies, and there is a separate Xen virtualization driver for Xen-based virtual machines when configured to use Xen Cloud Platform (XCP) or XenServer. Open-iSCSI is used to mount remote block devices, also known as volumes; it exposes these remote devices as local device files which can be attached to instances.
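Whether KVM can actually be used depends on the compute node's hardware virtualization support (otherwise libvirt falls back to plain QEMU). A quick, generic check, not part of the tutorial's tooling:

```shell
# Look for hardware virtualization CPU flags (vmx = Intel VT-x, svm = AMD-V).
# Note: /dev/kvm must also exist, which requires the kvm kernel module.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
  echo "hardware virtualization: supported"
else
  echo "hardware virtualization: not supported (nova would fall back to qemu)"
fi
```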


Neutron (network)

This service used to be integrated in Nova under the name nova-network, then it became a separate project called Quantum. Due to trademark issues, it was renamed 'Neutron'. It ensures connectivity between virtual machines created in OpenStack, using 'virtual networks'. A virtual network represents connectivity between multiple VMs that belong to the same tenant (user or project). To do so, Neutron works with a plugin model: a plugin drives a virtual switch technology or equivalent (Open vSwitch, Linux Bridge, Cisco Nexus 1000V, Brocade VCS…). To ensure isolation between virtual networks, it can use different layouts (VLAN, GRE, VXLAN…) or no isolation (local, flat…). Recently, the Neutron team created a meta-plugin (ML2) to factor out the code common to those plugins (database management in particular) and let each plugin communicate with its virtual switch; many plugins now come with a driver version for ML2, but ML2 is not yet completely integrated. OVS is the default plugin. Iptables is used to implement security rules and NAT functionality, which provides instances with access to the metadata service and supports floating IP addresses. Dnsmasq is used as a DHCP server to hand out IP addresses to virtual machine instances, as well as a DNS server.
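To illustrate the NAT part: a floating IP is essentially one DNAT rule (inbound) plus one SNAT rule (outbound) on the network node. The sketch below only prints the rules it would run; the addresses are hypothetical, and applying such rules requires root on a real deployment:

```shell
floating=192.168.64.2   # hypothetical floating (public) IP
fixed=10.0.0.2          # hypothetical fixed (private) IP of the instance
# Inbound: traffic to the floating IP is rewritten to the private IP.
echo "iptables -t nat -A PREROUTING -d $floating -j DNAT --to-destination $fixed"
# Outbound: the instance's traffic is made to appear from the floating IP.
echo "iptables -t nat -A POSTROUTING -s $fixed -j SNAT --to-source $floating"
```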


Cinder (block storage)

This service used to be integrated in Nova under the name nova-volume. It creates volumes and makes them available to the VMs created by Nova. By default, the Cinder service uses LVM to create and manage local volumes, and exports them via iSCSI using IET or tgt. It can also be configured to use other iSCSI-based storage technologies, or other backends (NFS…).
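For example, with the tgt target each exported volume boils down to a small configuration snippet mapping an iSCSI target name to the LVM logical volume backing it. A sketch only; the IQN suffix and volume name below are made up, and the real snippets are generated by the service:

```
<target iqn.2010-10.org.openstack:volume-872bc881>
    backing-store /dev/nova-volumes/volume-872bc881
</target>
```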


Horizon (dashboard)

The openstack-dashboard is a Django-based application that runs behind an Apache web server by default. It uses memcached for the session cache by default. A web-based VNC client called noVNC provides access to the VNC consoles of the running KVM instances.
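For reference, the memcached session cache corresponds to settings like the following in the dashboard's local_settings.py (a sketch with common default values; adapt the memcached address to your setup):

```
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
```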

Current support on Grid'5000

The following table describes the status of OpenStack on Grid'5000. In Lyon, only the sagittaire cluster is available, and only with QEMU, but it does not work well…

Site    Debian wheezy  Ubuntu precise (default)
Lille   Fail           OK
Lyon    Fail           OK
Nancy   In progress    OK
Rennes  In progress    OK
Sophia  In progress    OK

Deploy Grizzly (stable) on Grid'5000


openstackg5k is a script to deploy a full OpenStack cloud stack using the Grid'5000 API and the Puppet Labs OpenStack modules. It is included in a Git repository called openstack-campaign.

Source code

The source code of openstack-campaign is available in a Git repository:


This tool uses the Restfully Ruby gem to connect to the Grid'5000 API. You MUST set up a specific configuration file on the machine from which you will run openstackg5k commands. The recommended location for this file is ~/.restfully/. You can generate one using:

Terminal.png frontend:
mkdir ~/.restfully
echo "base_uri: cache: false" > ~/.restfully/
Terminal.png frontend:
chmod 0600 ~/.restfully/
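The resulting file is plain YAML with two keys. As an illustration only; the base_uri value below is an assumption, use the URI appropriate for your API version:

```
# ~/.restfully/ configuration -- example values
base_uri: https://api.grid5000.fr/stable/grid5000
cache: false
```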

Puppetlabs OpenStack modules

OpenStackg5k uses Puppet modules provided by Puppet Labs to install and configure OpenStack on the nodes. These modules are downloaded as Git submodules.

Terminal.png frontend:
cd openstack-campaign

Install puppet >= 2.7.14 (for modules support).

Terminal.png frontend:
gem install --no-ri --no-rdoc puppet -v 3.3.2 --user-install

Install openstack puppet modules using puppet forge.

Terminal.png frontend:
$HOME/.gem/ruby/1.9.1/gems/puppet-3.3.2/bin/puppet module install puppetlabs/openstack --version 2.2.0 --modulepath $(pwd)/modules

SSH key without password

An SSH key without a password is needed for password-less connections; by default the script takes the key ~/.ssh/id_dsa or ~/.ssh/id_rsa.

If you don't have password-less SSH keys, generate them using:

Terminal.png frontend:
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N '' -C "OpenStack tutorial ($USER)"

OpenStackg5k usage

OpenStackg5k details

The openstackg5k script will perform the following operations:

  • Reserve resources (nodes + KaVLAN) using the API (auto mode only).
  • Deploy nodes in a KaVLAN using the API (auto mode only).
  • Broadcast and launch the Puppet recipes.
    • Optional: configure LVM for nova-volume (only if nova volumes are enabled).


Terminal.png frontend:
ruby bin/openstackg5k -h
Usage: bin/openstackg5k (options)
   -u, --uri URI                    [auto] API Base URI (default: stable API)
   -d, --debug                             Enable debuging outputs
   -i, --input NODES or STDIN       [educ] Provide input node file (kavlan style) use it with educ mode
   -j, --name JOB NAME              [auto] The name of the job (default: openstackg5k)
   -k, --key KEY                    [auto] Name of then SSH key for the deployment (default: /home/sbadia/.ssh/
   -m, --mode MODE (default: auto)         [auto|educ] Educ = More educational please (Bypass submission and deployment, for Grid'5000 School 2012)
				              		Auto = I want my cloud now (Use API for submit and deploy nodes)
   -c, --no-clean                   [auto] Disable restfully clean (jobs/deploy)
   -n, --nodes Num Nodes            [auto] Number of nodes (default: 2, min: 2)
   -s, --site SITE                  [auto] Site to launch job (default: nancy)
   -V, --version                           Show Openstackg5k version
   -v, --volumes                           Enable Nova Volumes feature (ISCSI) (ubuntu-x64-br@sbadia/ubuntu-x64-1204-custom@sbadia)
   -w, --walltime WALLTIME          [auto] Walltime of the job (default: 1) hours
   -h, --help                              Show this message

Launch a deployment

To launch a deployment with one cloud controller and two computes for 10 hours:

Terminal.png frontend:
ruby bin/openstackg5k -w 10 -v -n 3

If you followed the OpenStack network setup with KaVLAN tutorial, use the educ mode to bypass submission and deployment (otherwise done through the Grid'5000 API):

Terminal.png frontend:
ruby bin/openstackg5k -m educ -i ~/kavlan_nodes
I, [2012-07-14T13:20:24.627384 #8285]  INFO -- : Finish install...

Play with OpenStack

OpenStack Dashboard

Using a SSH tunnel

Depending on your SSH configuration, please read SSH Proxycommand.

Terminal.png laptop:
ssh -L 8888:cloud-controller:80 site.g5k
Note.png Note

A script is available to configure the basic steps (keypair,firewall,image register), see Automated tests

Using https

You can connect to every node from outside Grid'5000 using HTTP and HTTPS. If you set up an HTTP server on your node, it can be contacted at this address

If you set up an HTTP server with a secure connection (HTTPS, on port 443), your node can be contacted at those two addresses: and

In the OpenStack case, the dashboard is accessible using HTTPS only (HTTP redirects you to HTTPS).

To enable the HTTPS server on the cloud-controller, add the following in /etc/openstack-dashboard/

USE_SSL = True

Add in /etc/apache2/ports.conf :

 NameVirtualHost *:443

Change in /etc/apache2/conf.d/openstack-dashboard.conf :

 <VirtualHost *:443>
 ServerName cloud-controller
 SSLEngine On
 SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
 SSLCACertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
 SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
 SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
 WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
 WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10
 Alias /static /usr/share/openstack-dashboard/openstack_dashboard/static/
 <Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
 Order allow,deny
 Allow from all
 </Directory>
 RedirectMatch permanent ^/$ /admin/
 </VirtualHost>

Enable SSL in Apache and restart it:

Terminal.png cloud-controller:
a2enmod ssl
Terminal.png cloud-controller:
/etc/init.d/apache2 restart

You can now browse the dashboard with your favorite browser at: https://cloud-controller. Connections pass through the proxy, so you should log in with your Grid'5000 user name and password. This means that the provided certificate is the certificate of, so most browsers will throw a warning because the domain name of your node obviously does not match the '' name.

Enter in dashboard

Dashboard OpenStack
  • login: admin
  • password: changeme

The dashboard does not currently allow saving an image, so we must do this step from the console.

Start your first instance

Manual way

The following operations are executed on the cloud controller; its FQDN was given by the script.

Terminal.png frontend:

OpenStack commands are based on authentication tokens, so we start by loading them…

Terminal.png cloud:
source /root/openrc
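The openrc file mainly exports the standard Keystone environment variables. A sketch with placeholder values, not the actual credentials generated by the deployment:

```shell
# Sketch of a typical openrc file -- placeholder values, NOT the actual
# credentials generated by the deployment.
cat > /tmp/openrc.example <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=changeme
export OS_TENANT_NAME=openstack
export OS_AUTH_URL=http://cloud-controller:5000/v2.0/
EOF
. /tmp/openrc.example
echo "authenticating as $OS_USERNAME on $OS_AUTH_URL"
```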

Register an image (Glance)

A minimal image (CirrOS, an equivalent of ttylinux) is available on

Terminal.png cloud:
wget -O /tmp/cirros-0.3.0-x86_64-disk.img

We'll save it into Glance, the virtual machine image manager of OpenStack.

Terminal.png cloud:
glance add name="cirros-amd64" is_public=true container_format=ovf disk_format=qcow2 < /tmp/cirros-0.3.0-x86_64-disk.img
======================================================================================================[100%] 18.5M/s, ETA  0h  0m  0s
Added new image with ID: 2dc33303-8963-40bc-bfad-d50361f84196

We verify the registration and availability of the image directly with Glance:

Terminal.png cloud:
glance index
ID                                   Name                           Disk Format          Container Format     Size
------------------------------------ ------------------------------ -------------------- -------------------- --------------
2dc33303-8963-40bc-bfad-d50361f84196 cirros-amd64                   qcow2                ovf                  9761280

Or with the nova command:

Terminal.png cloud:
nova image-list
|                  ID                  |  Name        | Status | Server |
| 2dc33303-8963-40bc-bfad-d50361f84196 | cirros-amd64 | ACTIVE |        |

Add a nova keypair

To connect over SSH to the VM without a password, we must add a key; nova keypair handles this.

Terminal.png cloud:
ssh-keygen -f /tmp/id_rsa -t rsa -N ''
Terminal.png cloud:
nova keypair-add --pub_key /tmp/ key_jdoe
Terminal.png cloud:
nova keypair-list
|    Name    |                   Fingerprint                   |
| key_jdoe   | 68:fa:72:41:45:52:d4:4b:36:76:77:92:f1:23:27:74 |

Setup your security group

Nova (OpenStack) uses security groups to control network access; we configure a basic group that allows SSH, ping and HTTP.

Terminal.png cloud:
nova secgroup-create jdoe_test "jdoe test security group"
|     Name    |        Description         |
| jdoe_test   | jdoe test security group   |
  • Authorize port 22 (ssh)
Terminal.png cloud:
nova secgroup-add-rule jdoe_test tcp 22 22
| IP Protocol | From Port | To Port |  IP Range | Source Group |
| tcp         | 22        | 22      | |              |
  • Authorize port 80 (http)
Terminal.png cloud:
nova secgroup-add-rule jdoe_test tcp 80 80
| IP Protocol | From Port | To Port |  IP Range | Source Group |
| tcp         | 80        | 80      | |              |
  • Authorize icmp (ping)
Terminal.png cloud:
nova secgroup-add-rule jdoe_test icmp -1 -1
| IP Protocol | From Port | To Port |  IP Range | Source Group |
| icmp        | -1        | -1      | |              |

Boot your first VM

The flavors represent different types of virtual machines, you can obtain the list of flavors by issuing:

Terminal.png cloud:
nova flavor-list

Flavors depend on the physical node hardware. On a graphene node (Nancy) we have these flavors:

Id  Flavor     Memory   VCPUs  Root FS  EBS
1   m1.tiny    512MB    1      0GB      0GB
2   m1.small   2048MB   1      10GB     20GB
3   m1.medium  4096MB   2      10GB     40GB
4   m1.large   8192MB   4      10GB     80GB
5   m1.xlarge  16384MB  8      10GB     160GB
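Since flavors map to physical RAM and cores, you can estimate how many instances of a flavor fit on one node. A rough sketch for a hypothetical 16 GB, 8-core compute node (ignoring the overcommit that nova allows by default):

```shell
node_ram_mb=16384; node_vcpus=8       # hypothetical 16 GB, 8-core node
flavor_ram_mb=2048; flavor_vcpus=1    # m1.small
by_ram=$(( node_ram_mb / flavor_ram_mb ))
by_cpu=$(( node_vcpus / flavor_vcpus ))
# The smaller of the two bounds wins.
if [ "$by_ram" -lt "$by_cpu" ]; then fit=$by_ram; else fit=$by_cpu; fi
echo "m1.small instances per node: $fit"
```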
  • Boot your first VM: an m1.tiny with the jdoe_test security group, the key_jdoe key and the cirros-amd64 image. Replace the image ID with the one returned by nova image-list.
Terminal.png cloud:
nova boot --flavor 1 --security_groups jdoe_test --image 2dc33303-8963-40bc-bfad-d50361f84196 --key_name key_jdoe jdoe_vm
|               Property              |                Value                 |
| OS-DCF:diskConfig                   | MANUAL                               |
| OS-EXT-SRV-ATTR:host                | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                 |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000001                    |
| OS-EXT-STS:power_state              | 0                                    |
| OS-EXT-STS:task_state               | scheduling                           |
| OS-EXT-STS:vm_state                 | building                             |
| accessIPv4                          |                                      |
| accessIPv6                          |                                      |
| adminPass                           | W24hzNuyHT2D                         |
| config_drive                        |                                      |
| created                             | 2012-07-14T13:10:59Z                 |
| flavor                              | m1.tiny                              |
| hostId                              |                                      |
| id                                  | 89638491-ca5a-4c75-aa91-555c52c888c7 |
| image                               | cirros-amd64                         |
| key_name                            | key_jdoe                             |
| metadata                            | {}                                   |
| name                                | jdoe_vm                              |
| progress                            | 0                                    |
| status                              | BUILD                                |
| tenant_id                           | ade78701963249e5acc4a2588fd7c5ae     |
| updated                             | 2012-07-14T13:10:59Z                 |
| user_id                             | 1b445889fd974a55ade3f85130edf89a     |
  • Display the VM creation; if the status is ACTIVE, it's OK.
Terminal.png cloud:
nova show jdoe_vm
|               Property              |                          Value                           |
| OS-DCF:diskConfig                   | MANUAL                                                   |
| OS-EXT-SRV-ATTR:host                |                  |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                     |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000001                                        |
| OS-EXT-STS:power_state              | 1                                                        |
| OS-EXT-STS:task_state               | None                                                     |
| OS-EXT-STS:vm_state                 | active                                                   |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| config_drive                        |                                                          |
| created                             | 2012-07-14T13:10:59Z                                     |
| flavor                              | m1.tiny                                                  |
| hostId                              | 430a61eed5d26c79d704a78ef24925b3ce10838b50efb33343e8739b |
| id                                  | 89638491-ca5a-4c75-aa91-555c52c888c7                     |
| image                               | cirros-amd64                                             |
| key_name                            | key_jdoe                                                 |
| metadata                            | {}                                                       |
| name                                | jdoe_vm                                                  |
| novanetwork network                 |                                                 |
| progress                            | 0                                                        |
| status                              | ACTIVE                                                   |
| tenant_id                           | ade78701963249e5acc4a2588fd7c5ae                         |
| updated                             | 2012-07-14T13:11:12Z                                     |
| user_id                             | 1b445889fd974a55ade3f85130edf89a                         |
  • You can connect to it using the private IP and the previously created key.
Terminal.png cloud:
ssh cirros@ -i /tmp/id_rsa
  • Check that we are on an image with a cirros kernel ;-)
Terminal.png vm:
uname -a
Linux cirros 3.0.0-12-virtual #20-Ubuntu SMP Fri Oct 7 18:19:02 UTC 2011 x86_64 GNU/Linux

Add a floating IP

  • The floating IP is taken from a pool, usually a public one
Terminal.png cloud:
nova floating-ip-create
|     Ip     | Instance Id | Fixed Ip | Pool |
| | None        | None     | nova |
Terminal.png cloud:
nova add-floating-ip jdoe_vm
Terminal.png cloud:
nova floating-ip-list
|     Ip     |             Instance Id              | Fixed Ip | Pool |
| | 89638491-ca5a-4c75-aa91-555c52c888c7 | | nova |
  • Check the connection to the floating IP.
Terminal.png cloud:
ssh cirros@ -i /tmp/id_rsa
Terminal.png vm:
uname -a
Linux cirros 3.0.0-12-virtual #20-Ubuntu SMP Fri Oct 7 18:19:02 UTC 2011 x86_64 GNU/Linux
  • The nova list command displays the status of the running instances (state, networks, and image id)
Terminal.png cloud:
nova list
|                  ID                  |    Name   | Status |             Networks             |
| 89638491-ca5a-4c75-aa91-555c52c888c7 | jdoe_vm   | ACTIVE | novanetwork=, |

Add a network disk (if nova-volumes)

  • Here we will create a volume (similar to an EBS volume in Amazon EC2 jargon)
Terminal.png cloud:
nova volume-create 6 --display_name jdoe_vol
Terminal.png cloud:
nova volume-list
| ID                                    |   Status  | Display Name | Size | Volume Type | Attached to |
|  872bc881-45f5-44a7-822f-dbdf9120f873 | available | jdoe_vol     | 6    | None        |             |
Terminal.png cloud:
nova volume-attach jdoe_vm 872bc881-45f5-44a7-822f-dbdf9120f873 /dev/vdb
Terminal.png cloud:
nova volume-list
| ID                                    | Status | Display Name | Size | Volume Type |             Attached to              |
| 872bc881-45f5-44a7-822f-dbdf9120f873  | in-use | jdoe_vol     | 6    | None        | 89638491-ca5a-4c75-aa91-555c52c888c7 |
  • On the virtual machine, check the volume attachment in "dmesg"
[  602.363273] virtio-pci 0000:00:06.0: irq 46 for MSI/MSI-X
[  602.457461]  vdb: unknown partition table
Terminal.png vm:
sudo fdisk -l /dev/vdb
Disk /dev/vdb: 6442 MB, 6442450944 bytes
16 heads, 63 sectors/track, 12483 cylinders, total 12582912 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/vdb doesn't contain a valid partition table
Terminal.png vm:
sudo fdisk /dev/vdb
  • n for a new partition, p for primary, 1 for the first, blank to use the whole disk (partition size) and w to write the changes
$ sudo fdisk /dev/vdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xd330dcee.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
Partition number (1-4, default 1): 1
First sector (2048-12582911, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-12582911, default 12582911):
Using default value 12582911

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Terminal.png vm:
sudo mkfs.ext4 /dev/vdb1
Terminal.png vm:
sudo mount /dev/vdb1 /mnt/
Terminal.png vm:
df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev                    248936         0    248936   0% /dev
/dev/vda1                23797     13233      9336  59% /
tmpfs                   252056         0    252056   0% /dev/shm
tmpfs                      200        20       180  10% /run
/dev/vdb1              6191680    143352   5733808   2% /mnt

Automatic way

A script is available to configure the basic steps (keypair, firewall, image registration).

  • CirrOS (Linux tty) and Ubuntu images are provided
Terminal.png cloud:
bash /tmp/ cirros
  • What does this script do?
  1. Downloads and registers a cloud VM image.
  2. Generates an SSH key and adds it to the nova keypairs.
  3. Creates and sets up the network security groups.
  4. Boots a tiny VM.

Go further with OpenStack

The service token and endpoint variables must be removed before continuing.

Terminal.png cloud:
Terminal.png cloud:

Play with ec2 API

Discovery of access points to the EC2 API

Terminal.png cloud:
keystone catalog --service=ec2
Service: ec2
|   Property  |                               Value                                |
| adminURL    | |
| internalURL | |
| publicURL   | |
| region      | RegionOne                                                          |

And the credential list:

Terminal.png cloud:
keystone ec2-credentials-create
|  Property |              Value               |
| access    | 80770aa5cd32405f914efca003d15664 |
| secret    | 88bd5c21e1af4b0994274b90ff8595a8 |
| tenant_id | ade78701963249e5acc4a2588fd7c5ae |
| user_id   | 1b445889fd974a55ade3f85130edf89a |
Terminal.png cloud:
keystone ec2-credentials-list
|   tenant  |              access              |              secret              |
| openstack | 80770aa5cd32405f914efca003d15664 | 88bd5c21e1af4b0994274b90ff8595a8 |

Now we can configure our credentials:

export EC2_ACCESS_KEY=80770aa5cd32405f914efca003d15664
export EC2_SECRET_KEY=88bd5c21e1af4b0994274b90ff8595a8
export EC2_URL=
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it

And list the VMs with the eucalyptus tools (euca2ools are compatible with the EC2 tools):

Terminal.png cloud:
RESERVATION	r-7egqhz0r	ade78701963249e5acc4a2588fd7c5ae	sbadia_test
INSTANCE	i-00000001	ami-00000001	running	key_sbadia (ade78701963249e5acc4a2588fd7c5ae, graphene-104-kavlan-	0		m1.tiny	2012-07-14T13:10:59.000Z	nova				monitoring-disabled			instance-store

List virtual images

Terminal.png cloud:
IMAGE	ami-00000001	None (cirros-amd64)		available	public			machine				instance-store

And regions

Terminal.png cloud:

Administrator commands (nova-manage)


  • Flavors, i.e. the types of virtual machines (memory/CPU/HDD)
Terminal.png cloud:
nova-manage flavor list
m1.medium: Memory: 4096MB, VCPUS: 2, Root: 10GB, Ephemeral: 40Gb, FlavorID: 3, Swap: 0MB, RXTX Factor: 1.0
m1.large: Memory: 8192MB, VCPUS: 4, Root: 10GB, Ephemeral: 80Gb, FlavorID: 4, Swap: 0MB, RXTX Factor: 1.0
m1.tiny: Memory: 512MB, VCPUS: 1, Root: 0GB, Ephemeral: 0Gb, FlavorID: 1, Swap: 0MB, RXTX Factor: 1.0
m1.xlarge: Memory: 16384MB, VCPUS: 8, Root: 10GB, Ephemeral: 160Gb, FlavorID: 5, Swap: 0MB, RXTX Factor: 1.0
m1.small: Memory: 2048MB, VCPUS: 1, Root: 10GB, Ephemeral: 20Gb, FlavorID: 2, Swap: 0MB, RXTX Factor: 1.0


For the status of the nova services ( :-) = everything is OK, XXX = problem):

Terminal.png cloud:
nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth nova             enabled    :-)   2012-07-14 13:31:47
nova-scheduler nova             enabled    :-)   2012-07-14 13:31:46
nova-network nova             enabled    :-)   2012-07-14 13:31:47
nova-cert nova             enabled    :-)   2012-07-14 13:31:47
nova-compute nova             enabled    :-)   2012-07-14 13:31:46
nova-compute nova             enabled    :-)   2012-07-14 13:31:41
nova-volume nova             enabled    :-)   2012-07-14 13:31:47
nova-volume nova             enabled    :-)   2012-07-14 13:31:47


Useful to see the boot process (if something goes wrong), but also the image passwords :-D

Terminal.png cloud:
nova console-log jdoe_vm
  ____               ____  ____
 / __/ __ ____ ____ / __ \/ __/
/ /__ / // __// __// /_/ /\ \
\___//_//_/  /_/   \____/___/
login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.