OpenStack

Introduction

OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface.

OpenStack architecture

Nova compute

The nova-compute service depends on a virtualization driver to manage virtual machines. By default, this driver is libvirt, which is used to drive KVM. However, the libvirt driver can also drive other hypervisor technologies, and there is a separate Xen virtualization driver for driving Xen-based virtual machines if configured to use Xen Cloud Platform (XCP) or XenServer. Open-iscsi is used to mount remote block devices, also known as volumes. Open-iscsi exposes these remote devices as local device files which can be attached to instances.
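
For reference, the hypervisor driver is selected in /etc/nova/nova.conf. A minimal sketch, assuming Essex/Folsom-era option names (adapt them to your OpenStack version):

 # /etc/nova/nova.conf (excerpt)
 compute_driver=libvirt.LibvirtDriver   # libvirt driver (the default)
 libvirt_type=kvm                       # or "qemu" on nodes without hardware virtualization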

Nova network

The nova-network service depends on a number of Linux networking technologies. It uses Linux bridging to create network bridges to connect virtual machines to the physical networks. These bridges may be associated with VLANs using Linux networking VLAN support, if running in the VLAN networking mode. Iptables is used to implement security rules and implement NAT functionality, which is used for providing instances with access to the metadata service and for supporting floating IP addresses. Dnsmasq is used as a DHCP server to hand out IP addresses to virtual machine instances, as well as a DNS server.
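
As an illustration, the VLAN networking mode mentioned above is selected in nova.conf. A sketch, assuming Essex/Folsom-era option names and a physical interface named eth0 (hypothetical, adapt to your nodes):

 # /etc/nova/nova.conf (excerpt)
 network_manager=nova.network.manager.VlanManager   # one bridge + VLAN per project
 vlan_interface=eth0                                 # physical interface carrying the VLANs
 vlan_start=100                                      # first VLAN id to allocate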

Cinder

By default, the nova-volume service uses LVM to create and manage local volumes, and exports them via iSCSI using IET or tgt. It can also be configured to use other iSCSI-based storage technologies.
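
As a sketch of what this looks like on a volume node: nova-volume expects an LVM volume group (nova-volumes by default) and an iSCSI helper. The device name below is hypothetical:

 pvcreate /dev/sdb                 # dedicate a disk (or partition) to volumes
 vgcreate nova-volumes /dev/sdb    # default volume group name expected by nova-volume
 # /etc/nova/nova.conf (excerpt)
 volume_group=nova-volumes
 iscsi_helper=tgtadm               # use tgt rather than IET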

Nova dashboard

The openstack-dashboard is a Django-based application that runs behind an Apache web server by default. It uses memcache for the session cache by default. A web-based VNC client called novnc is used to provide access to the VNC consoles associated with the running KVM instances.
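
For reference, the session cache is configured in /etc/openstack-dashboard/local_settings.py with the standard Django memcached backend. A minimal sketch (host and port are the usual memcached defaults, adapt if needed):

 CACHES = {
     'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': '127.0.0.1:11211',
     }
 }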

Deploy your OpenStack cloud on Grid'5000

The following table describes the status of OpenStack on Grid'5000. On Lyon, only the sagittaire cluster is available, and only with QEMU, which does not work well…

Sites    Debian wheezy    Ubuntu precise (default)
Lille    Fail             OK
Lyon     Fail             OK
Nancy    In progress      OK
Reims    Fail             OK
Rennes   In progress      OK
Sophia   In progress      OK

Prerequisites

openstackg5k is a script that deploys a full OpenStack cloud stack using the Grid'5000 API and the puppetlabs OpenStack modules. It is included in a Git repository called openstack-campaign.

Source code

The source code of openstack-campaign is available in a Git repository:

Restfully

This tool uses the Restfully Ruby gem to connect to the Grid'5000 API. You MUST set up a specific configuration file on the machine from which you will run openstackg5k commands. The recommended location for this file is ~/.restfully/api.grid5000.fr.yml. You can generate it using:

Terminal.png frontend:
mkdir ~/.restfully
echo "base_uri: https://api.grid5000.fr/stable/grid5000 cache: false" > ~/.restfully/api.grid5000.fr.yml
Terminal.png frontend:
chmod 0600 ~/.restfully/api.grid5000.fr.yml
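
You can check the resulting file; it should contain exactly these two lines:

 base_uri: https://api.grid5000.fr/stable/grid5000
 cache: false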

Puppetlabs OpenStack modules

OpenStackg5k uses Puppet modules provided by puppetlabs to install and configure OpenStack on the nodes. These modules are shipped as Git submodules.

Terminal.png frontend:
cd openstack-campaign
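
If you rely on the Git submodules mentioned above rather than on the Puppet Forge install below, they can be fetched from inside the repository with (a sketch):

 git submodule update --init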

Install Puppet >= 2.7.14 (required for module support).

Terminal.png frontend:
gem install --no-ri --no-rdoc puppet -v 3.3.2 --user-install

Install the OpenStack Puppet modules using Puppet Forge:

Terminal.png frontend:
https_proxy=http://proxy:3128 http_proxy=http://proxy:3128 $HOME/.gem/ruby/1.9.1/gems/puppet-3.3.2/bin/puppet module install puppetlabs/openstack --version 2.2.0 --modulepath $(pwd)/modules

SSH key without password

An SSH key without a passphrase is needed for password-less connections; by default, the script uses the key ~/.ssh/id_dsa or ~/.ssh/id_rsa.

If you don't have password-less SSH keys, generate them using:

Terminal.png frontend:
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N '' -C "OpenStack tutorial ($USER)"

OpenStackg5k usage

OpenStackg5k details

The openstackg5k script will perform the following operations:

  • Reserve resources (nodes + KaVLAN) using the API (auto mode only).
  • Deploy nodes in a KaVLAN using the API (auto mode only).
  • Broadcast and launch the Puppet recipes.
    • Optional: configure LVM for nova-volume (only if nova volumes are enabled).

Options

Terminal.png frontend:
ruby bin/openstackg5k -h
Usage: bin/openstackg5k (options)
   -u, --uri URI                    [auto] API Base URI (default: stable API)
   -d, --debug                             Enable debuging outputs
   -i, --input NODES or STDIN       [educ] Provide input node file (kavlan style) use it with educ mode
   -j, --name JOB NAME              [auto] The name of the job (default: openstackg5k)
   -k, --key KEY                    [auto] Name of then SSH key for the deployment (default: /home/sbadia/.ssh/id_dsa.pub)
   -m, --mode MODE (default: auto)         [auto|educ] Educ = More educational please (Bypass submission and deployment, for Grid'5000 School 2012)
				              		Auto = I want my cloud now (Use API for submit and deploy nodes)
   -c, --no-clean                   [auto] Disable restfully clean (jobs/deploy)
   -n, --nodes Num Nodes            [auto] Number of nodes (default: 2, min: 2)
   -s, --site SITE                  [auto] Site to launch job (default: nancy)
   -V, --version                           Show Openstackg5k version
   -v, --volumes                           Enable Nova Volumes feature (ISCSI) (ubuntu-x64-br@sbadia/ubuntu-x64-1204-custom@sbadia)
   -w, --walltime WALLTIME          [auto] Walltime of the job (default: 1) hours
   -h, --help                              Show this message

Launch a deployment

To launch a deployment with one cloud controller and two compute nodes for 10 hours:

Terminal.png frontend:
ruby bin/openstackg5k -w 10 -v -n 3

If you followed the Deploying OpenStack using KaVLAN tutorial, use the educ mode to bypass submission and deployment (otherwise done through the Grid'5000 API):

Terminal.png frontend:
ruby bin/openstackg5k -m educ -i ~/kavlan_nodes
I, [2012-07-14T13:20:24.627384 #8285]  INFO -- : Finish install...

Play with OpenStack

OpenStack Dashboard

Using a SSH tunnel

Depending on your SSH configuration, please read SSH Proxycommand.

Terminal.png laptop:
ssh -L 8888:cloud-controller:80 site.g5k
http://localhost:8888
Note.png Note

A script is available to configure the basic steps (keypair, firewall, image registration); see Automated tests.

Using https

You can connect to any node from outside Grid'5000 using HTTP and HTTPS. If you set up an HTTP server on your node, it can be contacted at https://mynode.mysite.proxy-http.grid5000.fr/.

If you set up an HTTP server with a secure connection (HTTPS, on port 443), your node can be contacted at these two addresses: https://mynode.mysite.grid5000.fr/ and https://mynode.mysite.proxy-https.grid5000.fr/.

In the OpenStack case, the dashboard is accessible over HTTPS only (it redirects you to HTTPS).

To enable the HTTPS server on the cloud controller, add to /etc/openstack-dashboard/local_settings.py:

SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')
USE_SSL = True

Add to /etc/apache2/ports.conf:

 NameVirtualHost *:443

Change /etc/apache2/conf.d/openstack-dashboard.conf to:

 <VirtualHost *:443>
 ServerName cloud-controller
 
 SSLEngine On
 SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
 SSLCACertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
 SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
 SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
 
 WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
 WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10
 Alias /static /usr/share/openstack-dashboard/openstack_dashboard/static/
 <Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
 Order allow,deny
 Allow from all
 </Directory>
 </VirtualHost>
 RedirectMatch permanent ^/$ /admin/

Enable SSL in Apache and restart it:

Terminal.png cloud-controller:
a2enmod ssl
Terminal.png cloud-controller:
/etc/init.d/apache2 restart

You can now browse the dashboard with your favorite browser at https://cloud-controller. Connections pass through the intranet.grid5000.fr proxy, so you should log in with your Grid'5000 username and password. This means that the certificate presented is the one of intranet.grid5000.fr, so most browsers will show a warning because the domain name of your node obviously does not match 'intranet.grid5000.fr'.

Log in to the dashboard

OpenStack dashboard credentials:
  • login: admin
  • password: changeme

The dashboard does not currently allow registering an image, so this step must be done from the command line.


Start your first instance

Manual way

The following operations are executed on the cloud controller; its FQDN was given by the script.

Terminal.png frontend:
ssh root@node-x-kavlan-x.site.grid5000.fr

OpenStack commands are based on authentication tokens, so we start by loading the credentials…

Terminal.png cloud:
source /root/openrc
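
The openrc file is generated during the deployment. Its contents typically look like the sketch below; the admin/changeme credentials are the ones used in this tutorial, while the tenant name and auth URL are assumptions to adapt to your controller:

 export OS_USERNAME=admin                          # tutorial credentials (see the dashboard section)
 export OS_PASSWORD=changeme
 export OS_TENANT_NAME=openstack                   # assumed tenant name
 export OS_AUTH_URL=http://localhost:5000/v2.0/    # assumed Keystone endpoint on the controller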

Register an image (Glance)

A minimal image (CirrOS, an equivalent of ttylinux) is available on apt.grid5000.fr:

Terminal.png cloud:
wget http://apt.grid5000.fr/cloud/cirros-0.3.0-x86_64-disk.img -O /tmp/cirros-0.3.0-x86_64-disk.img

We will register it in Glance, the virtual machine image manager of OpenStack:

Terminal.png cloud:
glance add name="cirros-amd64" is_public=true container_format=ovf disk_format=qcow2 < /tmp/cirros-0.3.0-x86_64-disk.img
======================================================================================================[100%] 18.5M/s, ETA  0h  0m  0s
Added new image with ID: 2dc33303-8963-40bc-bfad-d50361f84196

Verify the registration and availability of the image directly with glance:

Terminal.png cloud:
glance index
ID                                   Name                           Disk Format          Container Format     Size
------------------------------------ ------------------------------ -------------------- -------------------- --------------
2dc33303-8963-40bc-bfad-d50361f84196 cirros-amd64                   qcow2                ovf                  9761280

Or with the nova command:

Terminal.png cloud:
nova image-list
+--------------------------------------+--------------+--------+--------+
|                  ID                  |  Name        | Status | Server |
+--------------------------------------+--------------+--------+--------+
| 2dc33303-8963-40bc-bfad-d50361f84196 | cirros-amd64 | ACTIVE |        |
+--------------------------------------+--------------+--------+--------+

Add a nova keypair

To connect to the VM over SSH without a password, we must add a key; nova keypair handles this.

Terminal.png cloud:
ssh-keygen -f /tmp/id_rsa -t rsa -N ''
Terminal.png cloud:
nova keypair-add --pub_key /tmp/id_rsa.pub key_jdoe
Terminal.png cloud:
nova keypair-list
+------------+-------------------------------------------------+
|    Name    |                   Fingerprint                   |
+------------+-------------------------------------------------+
| key_jdoe   | 68:fa:72:41:45:52:d4:4b:36:76:77:92:f1:23:27:74 |
+------------+-------------------------------------------------+

Setup your security group

Nova (OpenStack) uses security groups to control network access to instances; we configure a basic group that allows SSH, ping and HTTP.

Terminal.png cloud:
nova secgroup-create jdoe_test "jdoe test security group"
+-------------+----------------------------+
|     Name    |        Description         |
+-------------+----------------------------+
| jdoe_test   | jdoe test security group   |
+-------------+----------------------------+
  • Authorize port 22 (ssh)
Terminal.png cloud:
nova secgroup-add-rule jdoe_test tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port |  IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
  • Authorize port 80 (http)
Terminal.png cloud:
nova secgroup-add-rule jdoe_test tcp 80 80 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port |  IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
  • Authorize icmp (ping)
Terminal.png cloud:
nova secgroup-add-rule jdoe_test icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port |  IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Boot your first VM

Flavors represent different types of virtual machines; you can obtain the list of flavors by issuing:

Terminal.png cloud:
nova flavor-list

Flavors depend on the physical node hardware. On a graphene node (Nancy) we have these flavors:

Id  Flavor     Memory     VCPUs  Root FS  EBS
1   m1.tiny    512 MB     1      0 GB     0 GB
2   m1.small   2048 MB    1      10 GB    20 GB
3   m1.medium  4096 MB    2      10 GB    40 GB
4   m1.large   8192 MB    4      10 GB    80 GB
5   m1.xlarge  16384 MB   8      10 GB    160 GB
  • Boot your first VM: an m1.tiny with the jdoe_test security group, the key_jdoe key and the cirros-amd64 image. Replace the image ID with the one returned by nova image-list.
Terminal.png cloud:
nova boot --flavor 1 --security_groups jdoe_test --image 2dc33303-8963-40bc-bfad-d50361f84196 --key_name key_jdoe jdoe_vm
+-------------------------------------+--------------------------------------+
|               Property              |                Value                 |
+-------------------------------------+--------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                               |
| OS-EXT-SRV-ATTR:host                | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                 |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000001                    |
| OS-EXT-STS:power_state              | 0                                    |
| OS-EXT-STS:task_state               | scheduling                           |
| OS-EXT-STS:vm_state                 | building                             |
| accessIPv4                          |                                      |
| accessIPv6                          |                                      |
| adminPass                           | W24hzNuyHT2D                         |
| config_drive                        |                                      |
| created                             | 2012-07-14T13:10:59Z                 |
| flavor                              | m1.tiny                              |
| hostId                              |                                      |
| id                                  | 89638491-ca5a-4c75-aa91-555c52c888c7 |
| image                               | cirros-amd64                         |
| key_name                            | key_jdoe                             |
| metadata                            | {}                                   |
| name                                | jdoe_vm                              |
| progress                            | 0                                    |
| status                              | BUILD                                |
| tenant_id                           | ade78701963249e5acc4a2588fd7c5ae     |
| updated                             | 2012-07-14T13:10:59Z                 |
| user_id                             | 1b445889fd974a55ade3f85130edf89a     |
+-------------------------------------+--------------------------------------+
  • Display the VM details; if the status is ACTIVE, everything is OK.
Terminal.png cloud:
nova show jdoe_vm
+-------------------------------------+----------------------------------------------------------+
|               Property              |                          Value                           |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                   |
| OS-EXT-SRV-ATTR:host                | graphene-104-kavlan-4.nancy.grid5000.fr                  |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                     |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000001                                        |
| OS-EXT-STS:power_state              | 1                                                        |
| OS-EXT-STS:task_state               | None                                                     |
| OS-EXT-STS:vm_state                 | active                                                   |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| config_drive                        |                                                          |
| created                             | 2012-07-14T13:10:59Z                                     |
| flavor                              | m1.tiny                                                  |
| hostId                              | 430a61eed5d26c79d704a78ef24925b3ce10838b50efb33343e8739b |
| id                                  | 89638491-ca5a-4c75-aa91-555c52c888c7                     |
| image                               | cirros-amd64                                             |
| key_name                            | key_jdoe                                                 |
| metadata                            | {}                                                       |
| name                                | jdoe_vm                                                  |
| novanetwork network                 | 10.0.0.2                                                 |
| progress                            | 0                                                        |
| status                              | ACTIVE                                                   |
| tenant_id                           | ade78701963249e5acc4a2588fd7c5ae                         |
| updated                             | 2012-07-14T13:11:12Z                                     |
| user_id                             | 1b445889fd974a55ade3f85130edf89a                         |
+-------------------------------------+----------------------------------------------------------+
  • You can connect to it using the private IP and the previously created key.
Terminal.png cloud:
ssh cirros@10.0.0.2 -i /tmp/id_rsa
  • Check that we are running an image with a CirrOS kernel ;-)
Terminal.png vm:
uname -a
Linux cirros 3.0.0-12-virtual #20-Ubuntu SMP Fri Oct 7 18:19:02 UTC 2011 x86_64 GNU/Linux

Add a floating IP

  • The floating IP is allocated from a pool, usually a public one.
Terminal.png cloud:
nova floating-ip-create
+------------+-------------+----------+------+
|     Ip     | Instance Id | Fixed Ip | Pool |
+------------+-------------+----------+------+
| 10.16.60.1 | None        | None     | nova |
+------------+-------------+----------+------+
Terminal.png cloud:
nova add-floating-ip jdoe_vm 10.16.60.1
Terminal.png cloud:
nova floating-ip-list
+------------+--------------------------------------+----------+------+
|     Ip     |             Instance Id              | Fixed Ip | Pool |
+------------+--------------------------------------+----------+------+
| 10.16.60.1 | 89638491-ca5a-4c75-aa91-555c52c888c7 | 10.0.0.2 | nova |
+------------+--------------------------------------+----------+------+
  • Check the connection to the floating IP.
Terminal.png cloud:
ssh cirros@10.16.60.1 -i /tmp/id_rsa
Terminal.png vm:
uname -a
Linux cirros 3.0.0-12-virtual #20-Ubuntu SMP Fri Oct 7 18:19:02 UTC 2011 x86_64 GNU/Linux
  • The nova list command displays the status of running instances (state, network and image ID).
Terminal.png cloud:
nova list
+--------------------------------------+-----------+--------+----------------------------------+
|                  ID                  |    Name   | Status |             Networks             |
+--------------------------------------+-----------+--------+----------------------------------+
| 89638491-ca5a-4c75-aa91-555c52c888c7 | jdoe_vm   | ACTIVE | novanetwork=10.0.0.2, 10.16.60.1 |
+--------------------------------------+-----------+--------+----------------------------------+

Add a network disk (if nova-volumes)

  • Here we will create a volume (similar to an EBS volume in Amazon EC2 jargon).
Terminal.png cloud:
nova volume-create 6 --display_name jdoe_vol
Terminal.png cloud:
nova volume-list
+---------------------------------------+-----------+--------------+------+-------------+-------------+
| ID                                    |   Status  | Display Name | Size | Volume Type | Attached to |
+---------------------------------------+-----------+--------------+------+-------------+-------------+
|  872bc881-45f5-44a7-822f-dbdf9120f873 | available | jdoe_vol     | 6    | None        |             |
+---------------------------------------+-----------+--------------+------+-------------+-------------+
Terminal.png cloud:
nova volume-attach jdoe_vm 872bc881-45f5-44a7-822f-dbdf9120f873 /dev/vdb
Terminal.png cloud:
nova volume-list
+---------------------------------------+--------+--------------+------+-------------+--------------------------------------+
| ID                                    | Status | Display Name | Size | Volume Type |             Attached to              |
+---------------------------------------+--------+--------------+------+-------------+--------------------------------------+
| 872bc881-45f5-44a7-822f-dbdf9120f873  | in-use | jdoe_vol     | 6    | None        | 89638491-ca5a-4c75-aa91-555c52c888c7 |
+---------------------------------------+--------+--------------+------+-------------+--------------------------------------+
  • On the virtual machine, check the volume attachment in dmesg:
[  602.363273] virtio-pci 0000:00:06.0: irq 46 for MSI/MSI-X
[  602.457461]  vdb: unknown partition table
Terminal.png vm:
sudo fdisk -l /dev/vdb
Disk /dev/vdb: 6442 MB, 6442450944 bytes
16 heads, 63 sectors/track, 12483 cylinders, total 12582912 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/vdb doesn't contain a valid partition table
Terminal.png vm:
sudo fdisk /dev/vdb
  • Type n for a new partition, p for primary, 1 for the first partition, leave the size blank to use the whole disk, and w to write the changes.
$ sudo fdisk /dev/vdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xd330dcee.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4, default 1): 1
First sector (2048-12582911, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-12582911, default 12582911):
Using default value 12582911

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.


Terminal.png vm:
sudo mkfs.ext4 /dev/vdb1
Terminal.png vm:
sudo mount /dev/vdb1 /mnt/
Terminal.png vm:
df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev                    248936         0    248936   0% /dev
/dev/vda1                23797     13233      9336  59% /
tmpfs                   252056         0    252056   0% /dev/shm
tmpfs                      200        20       180  10% /run
/dev/vdb1              6191680    143352   5733808   2% /mnt

Automatic way

A script is available to configure the basic steps (keypair, firewall, image registration).

  • CirrOS (a tiny Linux) and Ubuntu images are provided.
Terminal.png cloud:
bash /tmp/nova.sh cirros
  • What does this script do?
  1. Download and register a cloud VM image.
  2. Generate an SSH key and add it to nova keypair.
  3. Create and set up the network security groups.
  4. Boot a tiny VM.

Configure your VM for Grid'5000

As your VM is inside the Grid'5000 network, you need to configure a proxy for APT:

Terminal.png vm:
echo 'Acquire::http::Proxy "http://proxy.nancy.grid5000.fr:3128";' > /etc/apt/apt.conf.d/proxy-guess
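
Other tools inside the VM (wget, curl, gem, …) may also need the proxy; a sketch assuming the same Nancy site proxy:

 export http_proxy=http://proxy.nancy.grid5000.fr:3128
 export https_proxy=http://proxy.nancy.grid5000.fr:3128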

Go further with OpenStack

The service token and endpoint variables must be removed before continuing:

Terminal.png cloud:
unset OS_SERVICE_TOKEN
Terminal.png cloud:
unset OS_SERVICE_ENDPOINT

Play with the EC2 API

Discover the EC2 API endpoints:

Terminal.png cloud:
keystone catalog --service=ec2
Service: ec2
+-------------+--------------------------------------------------------------------+
|   Property  |                               Value                                |
+-------------+--------------------------------------------------------------------+
| adminURL    | http://graphene-102-kavlan-4.nancy.grid5000.fr:8773/services/Admin |
| internalURL | http://graphene-102-kavlan-4.nancy.grid5000.fr:8773/services/Cloud |
| publicURL   | http://graphene-102-kavlan-4.nancy.grid5000.fr:8773/services/Cloud |
| region      | RegionOne                                                          |
+-------------+--------------------------------------------------------------------+

And create and list the EC2 credentials:

Terminal.png cloud:
keystone ec2-credentials-create
+-----------+----------------------------------+
|  Property |              Value               |
+-----------+----------------------------------+
| access    | 80770aa5cd32405f914efca003d15664 |
| secret    | 88bd5c21e1af4b0994274b90ff8595a8 |
| tenant_id | ade78701963249e5acc4a2588fd7c5ae |
| user_id   | 1b445889fd974a55ade3f85130edf89a |
+-----------+----------------------------------+
Terminal.png cloud:
keystone ec2-credentials-list
+-----------+----------------------------------+----------------------------------+
|   tenant  |              access              |              secret              |
+-----------+----------------------------------+----------------------------------+
| openstack | 80770aa5cd32405f914efca003d15664 | 88bd5c21e1af4b0994274b90ff8595a8 |
+-----------+----------------------------------+----------------------------------+

Now we can configure our credentials

export EC2_ACCESS_KEY=80770aa5cd32405f914efca003d15664
export EC2_SECRET_KEY=88bd5c21e1af4b0994274b90ff8595a8
export EC2_URL=http://graphene-102-kavlan-4.nancy.grid5000.fr:8773/services/Cloud
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it

And list the VMs with the Eucalyptus tools (euca2ools are compatible with the EC2 tools):

Terminal.png cloud:
euca-describe-instances
RESERVATION	r-7egqhz0r	ade78701963249e5acc4a2588fd7c5ae	sbadia_test
INSTANCE	i-00000001	ami-00000001	10.16.60.1	10.0.0.2	running	key_sbadia (ade78701963249e5acc4a2588fd7c5ae, graphene-104-kavlan- 4.nancy.grid5000.fr)	0		m1.tiny	2012-07-14T13:10:59.000Z	nova				monitoring-disabled	10.16.60.1 10.0.0.2			instance-store

List the virtual machine images:

Terminal.png cloud:
euca-describe-images
IMAGE	ami-00000001	None (cirros-amd64)		available	public			machine				instance-store

And regions

Terminal.png cloud:
euca-describe-regions
REGION	nova	http://10.16.2.102:8773/services/Cloud

Administrator commands (nova-manage)

Flavor

Terminal.png cloud:
nova-manage flavor list
m1.medium: Memory: 4096MB, VCPUS: 2, Root: 10GB, Ephemeral: 40Gb, FlavorID: 3, Swap: 0MB, RXTX Factor: 1.0
m1.large: Memory: 8192MB, VCPUS: 4, Root: 10GB, Ephemeral: 80Gb, FlavorID: 4, Swap: 0MB, RXTX Factor: 1.0
m1.tiny: Memory: 512MB, VCPUS: 1, Root: 0GB, Ephemeral: 0Gb, FlavorID: 1, Swap: 0MB, RXTX Factor: 1.0
m1.xlarge: Memory: 16384MB, VCPUS: 8, Root: 10GB, Ephemeral: 160Gb, FlavorID: 5, Swap: 0MB, RXTX Factor: 1.0
m1.small: Memory: 2048MB, VCPUS: 1, Root: 10GB, Ephemeral: 20Gb, FlavorID: 2, Swap: 0MB, RXTX Factor: 1.0

Services

To see the status of the nova services (:-) means everything is OK, XXX means there is a problem):

Terminal.png cloud:
nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth graphene-102-kavlan-4.nancy.grid5000.fr nova             enabled    :-)   2012-07-14 13:31:47
nova-scheduler   graphene-102-kavlan-4.nancy.grid5000.fr nova             enabled    :-)   2012-07-14 13:31:46
nova-network     graphene-102-kavlan-4.nancy.grid5000.fr nova             enabled    :-)   2012-07-14 13:31:47
nova-cert        graphene-102-kavlan-4.nancy.grid5000.fr nova             enabled    :-)   2012-07-14 13:31:47
nova-compute     graphene-103-kavlan-4.nancy.grid5000.fr nova             enabled    :-)   2012-07-14 13:31:46
nova-compute     graphene-104-kavlan-4.nancy.grid5000.fr nova             enabled    :-)   2012-07-14 13:31:41
nova-volume      graphene-104-kavlan-4.nancy.grid5000.fr nova             enabled    :-)   2012-07-14 13:31:47
nova-volume      graphene-103-kavlan-4.nancy.grid5000.fr nova             enabled    :-)   2012-07-14 13:31:47

Console-log

Useful to see the boot process (if something goes wrong), and also the default image password :-D

Terminal.png cloud:
nova console-log jdoe_vm
  ____               ____  ____
 / __/ __ ____ ____ / __ \/ __/
/ /__ / // __// __// /_/ /\ \
\___//_//_/  /_/   \____/___/
 http://launchpad.net/cirros
 
login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.


Documentation

More information on tools used in this tutorial

Tutorial developers

Upstream updates

rake modules:subup
rake repo:clean

Issues, features
