
About this document

How to add or correct a FAQ entry?

Just like any other page of the wiki, you can edit the FAQ yourself to improve it. If you click on one of the little "edit" links placed after each question, you will be able to edit that particular question. To edit the whole page, simply choose the edit tab at the top of the page.

Publications and Grid'5000

Is there an official acknowledgement?

Yes there is: you agreed to it when accepting the user Charter. As the charter might have been updated since, please refer to the latest version. You should use it in all publications presenting results obtained (even partially) using Grid'5000.

How to mention Grid'5000 in HAL?

HAL is an open archive you are invited to use. If you do so, the recommended way of mentioning Grid'5000 is to use the collaboration field of the submission form, with the Grid'5000 keyword, capitalized as such.

Accessing Grid'5000

What is the theory?

In theory, you should be able to access Grid'5000 only from the production network of the lab hosting the Grid'5000 site that hosts your account. For off-site users, a whitelist of IPs allowed to access the site through the access.site.grid5000.fr machine is generally maintained. You will need properly configured SSH keys (please refer to the page dedicated to SSH if you don't understand these last words), as this machine should not allow you to log in using a password.

In practice, some sites have an access.site.grid5000.fr machine reachable from any IP. You can use the machines with external access listed as accessible from anywhere as a backup.
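For example, a minimal sketch of an off-site connection (jdoe and rennes are hypothetical placeholders for your username and site):

ssh jdoe@access.rennes.grid5000.fr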

I haven't been able to connect since May 22nd, 2008?

On May 22nd, 2008, all Grid'5000 frontends and servers were updated to blacklist SSH keys known to be vulnerable after CVE-2008-0166. It is therefore possible that the SSH keys in your different Grid'5000 home directories are no longer usable, which would lock you out of Grid'5000.

If reading SSH_And_Vulnerable_keys is not sufficient to solve your access problems, you should mail support-staff with the output of the ssh -v -v login@access.site.grid5000.fr command, as well as the public part of a correct key, to have your situation resolved.

How to directly connect by SSH to any machine within Grid'5000 from my workstation?

This tip consists of customizing the SSH configuration file ~/.ssh/config. We use the nc command (tcpconnect could also be used) to bind stdin and stdout to a network connection:

Host *.g5k
   User login
   ProxyCommand ssh login@access.grid5000.fr "nc  -w 60 `basename %h .g5k` %p"
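With this configuration, any hostname suffixed with .g5k is reached transparently through the access machine. A usage sketch (frontend.rennes is a hypothetical target):

ssh frontend.rennes.g5k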

Please have a look at the SSH page for a deeper understanding of this proxy feature.

Note: Grid'5000 internal network hostnames like *.grid5000.fr are unknown outside of Grid'5000.

Warning: nc and tcpconnect commands are not always available on frontends.

How to access the Internet from nodes?

For security reasons, it is not possible to connect to the Internet from inside Grid'5000, except through the Web proxy. However, SSH port forwarding is not disabled, and you can use the following command, which forwards connections made to host_g5k:port_g5k on to host:port:

ssh -R port_g5k:host:port host_g5k.site.grid5000.fr
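For example, a sketch that lets a node fetch a file from an external web server (node-1.rennes and download.example.org are hypothetical placeholders; the .g5k suffix relies on the SSH configuration described above):

ssh -R 4222:download.example.org:80 node-1.rennes.g5k

Once the tunnel is up, localhost:4222 on the node reaches download.example.org:80, e.g.:

wget http://localhost:4222/somefile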
Note: if you only need to access web sites that are not allowed through the Web proxy, you may ask the staff to add them to the whitelist.

How to access InriaGforge from frontends

Accessing INRIA's Gforge repository directly from within Grid'5000 to check out, commit, or even synchronize your project data between Grid'5000 sites is very useful. To make it easy, all proxies and frontends are configured to allow transparent access to InriaGforge using the WebDAV protocol over HTTPS. You can check out and commit your InriaGforge project repository:

svn checkout --username gforgelogin https://scm.gforge.inria.fr/svn/project

You will be asked to accept InriaGforge's SSL server certificate. If you accept it permanently, you will never be bothered again.

If you have previously checked out a repository using svn+ssh, you can easily relocate your working copy with the following commands:

cd working_copy/
svn switch --relocate svn+ssh://scm.gforge.inria.fr/svn/project https://scm.gforge.inria.fr/svn/project .

Then all your svn commands will use the WebDAV protocol, without the need to maintain SSH tunnels.

How to access InriaGforge using SSH

Warning: the easiest way to access InriaGforge is using the WebDAV protocol from the frontends. Using SSH is generally more difficult, since you need to properly understand the basics of SSH and the global architecture of Grid'5000 to get it working.

From frontends

Due to security policy, direct access to InriaGforge from inside Grid'5000 is prohibited by default. The decision to allow access to InriaGforge depends on each site. The following frontends can access InriaGforge:


For other frontends that do not allow access to InriaGforge, there is a little SSH trick. Edit the SSH configuration file ~/.ssh/config on the host you use to access Grid'5000 sites (do not forget to replace site by the target site, g5klogin by your Grid'5000 username and fwdport by an arbitrary port number > 1024):

Host access.site.grid5000.fr
    User g5klogin
    RemoteForward fwdport scm.gforge.inria.fr:22

Now connect to the configured host (localhost:fwdport on the frontend will be forwarded to scm.gforge.inria.fr:22):

ssh access.site.grid5000.fr

Then modify the SSH configuration file ~/.ssh/config on this Grid'5000 frontend (do not forget to replace gforgelogin by your InriaGforge username and fwdport by the previously chosen port number):

Host scm.gforge.inria.fr
    Hostname localhost
    Port fwdport
    User gforgelogin

You can check out and commit your InriaGforge project repository:

svn co svn+ssh://scm.gforge.inria.fr/svn/project

From nodes

Note: first of all, remember that your home directory is mounted (or can be mounted, in the case of a deployed node). As a consequence, you can run the svn commands from the frontend while still working on the files used on the nodes.

If you want to connect from a Grid'5000 node to a server that lives outside of Grid'5000 (e.g. the scm.gforge.inria.fr SSH server), you must combine the two previous tips.

First, connect to the node from your workstation directly, and create a reverse tunnel:

ssh gdx-42.orsay.g5k -R fwdport:scm.gforge.inria.fr:22

Then, on the node, configure SSH (~/.ssh/config) so that connections to the outside server go through the tunnel:

Host scm.gforge.inria.fr
   Hostname localhost
   Port fwdport
   User gforgelogin

You can check out and commit your InriaGforge project repository:

svn co svn+ssh://scm.gforge.inria.fr/svn/project

Account management

Why does my home directory not contain the same files on every site?

Every site has its own file server; it is the user's responsibility to synchronize personal data between home directories on the different sites. You may use the rsync command to synchronize a remote site's home directory (be careful: with --delete, this will erase any remote files that do not exist in your local home directory):

rsync -n --delete -avz ~/ frontend.site.grid5000.fr:~/

NB: remove the -n argument once you are sure of the result; with it, rsync only performs a dry run. ;)

How to get my home mounted on deployed nodes?

This is completely automatic if you deploy a *-nfs or *-big image. You can then connect using your own login and, once connected to the node, just enter your home directory:

 cd /home/<your login>

How to restore a wrongly deleted file?

No backup facility is provided by the Grid'5000 platform. Please watch your fingers and back up your data using external backup services.

What about disk quotas?

You'll find that for each account and each site, disk quotas may be activated.

  • the soft limit is set to what the admins consider a reasonable limit for an account on a more or less permanent basis. You can use more disk space temporarily, but you should not try to trick the system into keeping that data on the shared file system.
  • the hard limit is set so as to preserve usability for the other users if one of your scripts produces unexpected amounts of data. You will not be able to exceed that limit.

More information is available in the quotas page.

How to increase my disk quota?

Should you need higher quotas, please visit your user account settings page at https://api.grid5000.fr/ui/account (my storage tab), or send an email to support-staff@lists.grid5000.fr explaining how much storage space you want and what for.

SSH related questions

How to fetch all the SSH host keys of one site?

To avoid answering 'yes' when connecting with SSH to a host for the first time, the ~/.ssh/known_hosts file can be generated automatically for a whole site:

nodelist site | ssh-keyscan -t dsa,rsa -f - >> ~/.ssh/known_hosts

Please have a look at "How to get a site list of nodes?", for information on the nodelist command.

Warning: this hint is deprecated with OAR2 and oarsh (but it is still worth reading if you use ssh through the allow_classic_ssh mode). Report to the staff if needed.

How to avoid SSH host key checking?

With the StrictHostKeyChecking option, SSH host key checking can be turned off. This option can be set in the ~/.ssh/config file:

StrictHostKeyChecking no

Or it can be passed on the command line:

ssh -o StrictHostKeyChecking=no host
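To avoid weakening host key checking everywhere, you can scope the option; a sketch for the ~/.ssh/config file, assuming you only want to disable checking for Grid'5000 machines (UserKnownHostsFile /dev/null additionally keeps their frequently changing keys out of your regular known_hosts file):

Host *.grid5000.fr *.g5k
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null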

How not to get tons of SSH errors about Man-in-the-middle attacks while deploying images?

If you get the following error when you try to connect to a machine using ssh:

Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
Please contact your system administrator.

This is because SSH notices that the machine answering the connection is not the same from one run to the next. This is perfectly logical if you have just redeployed an image: it is not the same system that is answering...

Technically speaking, the file /etc/ssh/ssh_host_dsa_key.pub is likely to differ between your own deployed image and the default image. SSH will thus complain, since such a replacement usually denotes that someone is intercepting the communication and pretending to be the server in order to obtain information from you.

If you don't want to care about these issues, there are several solutions:

  • Add StrictHostKeyChecking=no to your .ssh/config file to tell SSH to ignore those errors.
  • Pass this option (StrictHostKeyChecking=no) on the command line to ssh (using -o).
  • Make sure that your own images contain the same host_dsa_key as the default ones. They can usually be found in the pre/post install scripts of your site.

Outside of the Grid'5000 scope, the correct solution is to fix your ~/.ssh/known_hosts, either by hand or using the command ssh-keygen -R hostname.

Please have a look at the SSH page also.

What kind of public keys are supported on Grid'5000?

The only public key format allowed on Grid'5000 is the OpenSSH format.
You MUST provide and use SSH public keys in this format.

  • SSH2-style public key (NOT SUPPORTED):

---- BEGIN SSH2 PUBLIC KEY ----
Comment: rsa-key-20090623
AAAAB3NzaC1yc2EAAAABJQAAAQEA...
---- END SSH2 PUBLIC KEY ----

  • OpenSSH-style public key:

ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEA... rsa-key-20090623

To convert SSH public keys from the SSH2 format to the OpenSSH format, see this tutorial.
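Alternatively, OpenSSH's ssh-keygen can perform the conversion itself; a minimal sketch, assuming the SSH2-format key is stored in key_ssh2.pub:

ssh-keygen -i -f key_ssh2.pub > key_openssh.pub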

Experiments issues

How to add an HTTP site to the proxy white-list?

The Internet is not directly accessible from Grid'5000; queries go through an HTTP proxy.

However, you can request the addition of a site to the white list. For that, see WebProxy Request.

Why and how to fill in an experiment report

You should document the experiments that you are doing in your user report. This way, you help the team in charge of the instrument document the value of the platform.

Software installation issues

What is the general philosophy?

This is how things should work: a minimal set of software is installed on the frontends and nodes of each site. You should find the same versions on the frontend and on the nodes of one site, but no effort is made for all sites to use coherent versions. If you need some other software package, you should create a Kadeploy image including it, as you have root access on deployed images. The crux of the matter is that it seems impossible to install all the software users could need, in different and sometimes conflicting versions, so we don't even try.

What are the exceptions?

  • For editing, file management and scripting software that could be useful for all users, admins often accept requests to install additional packages available for the distribution used on the frontend machine.
  • For specific libraries or compilers, some sites have a secondary frontend where you can compile and where admins will be happy to install additional packages available for the distribution. Other sites have a specific file hierarchy where users can install software for the benefit of all.

Are any evolutions to this general philosophy planned?

Yes, but don't expect to see big changes before 2009, when we hope to see how we could make the same version of some of the usual software available on all sites. A standard compilation frontend could also be put into production on all sites by the end of 2008.

Deployment related issues

Why don't I see the output of kadeploy in OAR.jobID.std* files?

When you launch a kadeploy process from a script, the process is detached in a screen session.

You can follow the progress of the deployment with:

frontend$ screen -r

If you are running more than one deployment process, you can list the different screen sessions with:

frontend$ screen -ls

More information about screen is available here.

Deployments seem to fail for unknown reasons

From time to time, kadeploy gives me the following message for some of my nodes:

paravent-25.rennes.grid5000.fr error not there on last check

These errors are annoying. They happen when, at some point during the deployment, the specified node did not come up as expected by Kadeploy, meaning that no SSH daemon could be reached before the cluster-wide timeout. This happens for different reasons:

  • the node did not reboot. This happens because remote management hardware is not dependable at the scale of a cluster. Very few vendors plan for it to be used many times each day, so the failure rate isn't very good on some hardware.
  • the node did not boot correctly or fast enough. Maybe the DHCP request took more time than usual or was lost, or the tftp server sending the kernel to boot was slow because of local network traffic or because users were using the frontend as a compilation machine or as a head node for their experiments.
  • the node did not copy the environment to disk fast enough. If the disk is slow for some reason, temporarily or permanently, this will happen. The Kadeploy v2.x series does not report this as a specific error, so it can take time before the failure is identified and corrected.
  • the environment did not boot correctly, or did not start its SSH server fast enough because of configuration scripts.

You can do 3 things to handle these failures:

  • use a wrapper script such as katapult to retry a deployment as long as necessary
  • manually check the real status of the node using kaconsole -m node-XX.site.grid5000.fr
  • report failing nodes to the admins, or suggest an increase of the timeout if a manual check reveals hardware failures or a correct deployment despite Kadeploy's report

My environment does not work on all clusters

There are a few reasons why a working environment is not always portable:

  1. The kernel used does not support all hardware. You are advised to base your environment on one of the reference environments to avoid dealing with this, or to carefully read the hardware section of each site to see the list of kernel drivers that need to be compiled in your environment for it to be able to boot on all clusters. Of course, when a new cluster is integrated, you might need to update your kernel for portability.
  2. The boot method used on all sites is not the same. To make a long story short, it is not possible to use grub on all sites, and sites without grub can't boot some environments where the initrd is a symbolic link. Before kadeploy 2.1.7, sites without grub could also not boot Xen environments. Here is a list of the clusters where Grub is enabled (and working) in the kadeploy boot process (as of 2008-01-17):
    • azur (sophia)
    • bordemer (bordeaux)
    • gdx (orsay)
  3. The pre- and post-installation scripts do not recognize your environment, and therefore network access, console access or site-specific configurations are not taken into account. Here, you can check the contents of the default pre- and post-installation scripts to see the variables set by kadeploy to enable customization at the post-install stage.

Kadeploy fails with Image file not found!

This means that kadeploy is not able to read your environment's main archive. This can have many causes:

  • the registered filename is wrong; this can be verified by retrieving the registered information with kaenvironments -e environment_name
  • the extension is not right (for example .tar.gz does not work, whereas .tgz is OK)
  • the directory permissions are not good: kadeploy reads this file as the deploy user, which does not have the same access permissions as you. Everyone should be able to read the files involved in an environment (i.e. the main archive and the postinstall), as shown in the sketch below.
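A minimal sketch of opening up the permissions (the ~/envs directory and file names are hypothetical):

chmod a+x ~/ ~/envs
chmod a+r ~/envs/myimage.tgz ~/envs/postinstall.tgz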

Some nodes fail with the message Not there on last check!

This means that the node was deployed, but a problem occurred during the last phase of the deployment process: booting on the deployed partition. A lot of different causes can lead to this state. The first thing to do is to check the failure rate: if all the target nodes have the same deployment problem (and the number of target nodes is significant), you can be sure that there is a problem with your environment. In this case, you should check whether your environment follows all the minimum Environments requirements for Grid5000. Kadeploy's final check for a deployment is to verify that an SSH server is running on the target node. You have to check with kaconsole, in this order:

  • network configuration: if the network is not configured, it won't work (a ping is enough to check this)
  • SSH server startup: sshd performs a lot of verifications before launching the service, and a single problem is enough to prevent it from working. In any case, you should try to launch it on the partially deployed node with kaconsole to get a good hint for solving this issue:
gdx-174:~# /etc/init.d/ssh start
Starting OpenBSD Secure Shell server: sshd/var/run/sshd must be owned by root and not group or world-writable.

and indeed:

gdx-174:~# ls -ld /var/run/sshd/
drwxrwxrwx  2 root root 4096 Oct  7  2004 /var/run/sshd/
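In that case, a sketch of the fix, to be run on the node through kaconsole:

chmod 0755 /var/run/sshd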

Kadeploy is complaining about a node already involved in another deployment

The warning you see is:

node $node is already involved in another deployment

This error occurs:

  • when 2 concurrent deployments are attempted on the same node. If you have 2 simultaneous deployments, make sure you have 2 distinct sets of nodes.
  • when there is a problem in the kadeploy database: this can typically happen when a deployment ended in a strange way. The best option is to wait for about 15 minutes and retry the deployment: kadeploy can correct its database automatically.

Misc Kadeploy errors

For Kadeploy 2.1.6, additional information can be found here

How to quickly check node health

You can check node health, based on ICMP requests, with the nmap command if it is available on the site:

nodelist site | nmap -iL - -sP

Or with the fping command, if it is available:

nodelist site | fping -a 2> /dev/null

(See TakTuk).

How to kill all my processes on a host?

On the currently connected host (warning: it will disconnect you):

kill -KILL -1

How do I exit from kaconsole on cluster X from site Y

Depending on the hardware (cluster) you are running kaconsole on, the exit sequence may differ. The Kaconsole page gives the sequences for every Grid'5000 cluster.

Why are the network interfaces named eth2,eth3...ethn in my deployed environment?

This is due to the default udev rules on Debian-based systems, which allocate unique interface names to physical network devices. When you deploy an environment on another node, udev detects new physical network devices and allocates them the next available interface names, incrementing each time. Delete the appropriate rules in your environment to prevent udev from behaving this way:

node$ rm /etc/udev/rules.d/*persistent-net.rules

Job submission related issues

What is the so-called "best-effort" mode of OAR?

The best-effort mode was implemented to back-fill the cluster with jobs considered less important, without blocking "regular" jobs. To submit jobs under that policy, you simply have to select the besteffort job type in your oarsub command.

oarsub -t besteffort script_to_launch

Jobs submitted that way will only get scheduled on resources when no other job uses them (any regular job overtakes besteffort jobs in the waiting queue, regardless of submission times). Moreover, these jobs are killed (as if oardel were called) when a recently submitted regular job needs the nodes they use.

By default, no checkpointing or automatic restart of besteffort jobs is provided: they are just killed. That is why this mode is best used with a tool that can detect killed jobs and resubmit them. OAR2 however provides options for that. You may also have a look at tools like CIGRI or APST.
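For instance, a sketch assuming your OAR2 version supports the idempotent job type, which makes OAR resubmit the job automatically after it is killed:

oarsub -t besteffort -t idempotent script_to_launch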

Why are nodes suspected after attempted kill of best-effort jobs?

This is a recurring issue on Grid'5000. Bugs 1355 and 2362 are the first known traces of this, and a generic bug (3072) has been opened to track occurrences of this behaviour.

The scenario is the following:

  1. a user requests resources, and his request can only be fulfilled by killing best-effort jobs
  2. after the best-effort jobs are killed, some nodes become suspected instead of coming online to fulfil the request. What happens here is:
    1. oar sends a kill request to the best-effort jobs
    2. oar times out on the kill request for some nodes
    3. oar suspects those nodes
  3. the initial requesting user has his request either trimmed (in the case of a reservation) or rejected
    1. in the case of a reservation, oar waits 5 minutes for absent or suspected nodes to come up before starting the job with a reduced number of resources
  4. phoenix, a script running every 15 minutes to attempt to automatically repair suspected nodes, puts the nodes back online, possibly thanks to a hard reboot
  5. the script running the best-effort jobs finds this new resource and schedules a new best-effort job on it, to the intense frustration of the initial user

Our best guess on the reason for step 2.2 is that some best-effort jobs trash the resources so hard that the time needed for clean-up is longer than the timeout. We are waiting for a simple, reproducible use-case to be able to investigate further. The problem is that inspecting a node in such a state, to understand how we could handle this better, is nearly impossible.

How to pass arguments to my script

When you do a passive submission through oarsub, you must specify a script. This can be a simple script name or a more complex command line with arguments.

To pass arguments, you have to quote the whole command line, as in the following example:

oarsub -l nodes=4,walltime=2 "/path/to/myscript arg1 arg2 arg3"

Note: to avoid random code injection, oarsub allows only alphanumeric characters ([a-zA-Z0-9_]), whitespace characters ([ \t\n\r\f\v]) and a few others ([/.-]) inside its command line argument.

Why are /core and -t deploy or -t allow_classic_ssh incompatible?

Jobs of type deploy or type allow_classic_ssh imply exclusive usage of a node. Therefore, specifying core information for your submission can only lead to inconsistencies, and it is prohibited by an admission rule.

Why did my advance reservation start with fewer resources than I requested?

Since OAR 2.2.12, an advance reservation is validated regardless of the resources being in any of the following states:

  1. alive
  2. suspected
  3. absent

(but not dead) at the time the reservation is due to start and during the planned walltime (because those states are transitional).

Moreover, the resources allocated to an advance reservation are definitively fixed upon this validation, which means that if any of those resources becomes absent or suspected after the validation, it will not be replaced.

At the start time of the advance reservation, OAR looks for any unavailable resources (absent or suspected) and, if some exist, waits up to 5 minutes for them to return. This can happen because:

  • resources are in the absent state during the reboot after a kadeploy job, and become alive again as soon as the boot completes
  • resources whose good health is suspected by OAR might be fixed by an admin or a maintenance tool operation

If some resources are still not back at the time the job actually starts, they are lost for the job, which then gets fewer resources than expected.

That is the price to pay for using advance reservations.


Information about a reduced number of resources or a reduced walltime for a reservation due to this mechanism is available in the events section of the output of:

oarstat -fj jobid

Access to logs

OAR logs

A little-known feature of Grid'5000 is that all users have read-only access to OAR's database. You can connect as user oarreader with password read to the oar2 database on every mysql.site.grid5000.fr server. This gives you access to the complete history of jobs on all Grid'5000 sites. Beware that this is the production database of OAR: please be careful with your queries!
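A sketch of such a read-only query (rennes is a hypothetical site; the jobs table and its columns follow the usual OAR2 schema):

mysql -h mysql.rennes.grid5000.fr -u oarreader -pread oar2 \
  -e "SELECT job_id, job_name, state FROM jobs ORDER BY job_id DESC LIMIT 10;"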

About OAR 2.4

How to know if a node is in energy saving mode or really absent?

Nodes in energy saving mode are displayed with the state "Absent (standby)" by the oarnodes command.
The state "Absent (standby)" means that the node is shut down in order to save energy.
Nodes in this state will be automatically started by OAR when needed.

Advanced users who query the OAR database directly can determine whether a node is in energy saving mode or really absent using the "available_upto" field in the resources table (in versions older than OAR 2.4, this field is named "cm_availability").
If energy saving is enabled on the cluster, the "available_upto" field provides a date (unix timestamp) until which the resource will be available.

  • A node "Absent" is in energy saving mode if the field "available_upto" is greater than the current unix timestamp

An example of an SQL query listing nodes that are absent because of energy saving mode:

SELECT distinct(network_address) FROM resources WHERE state="Absent" AND available_upto >= UNIX_TIMESTAMP()
  • A node "Absent" is really absent if :

- the field "available_upto" is equal to 0
- or the field "available_upto" is smaller than the current unix timestamp (this case should not occur upon Grid'5000)

An example of an SQL query listing really absent nodes:

SELECT distinct(network_address) FROM resources WHERE state="Absent" AND (available_upto < UNIX_TIMESTAMP() OR available_upto = 0)

How to detect nodes in maintenance?

Nodes in maintenance are nodes in the really absent state.
See above: How to know if a node is in energy saving mode or really absent?

How to execute jobs within another one?

This functionality is called container jobs (available only since OAR 2.3).
With it, it is possible to execute jobs within another one, like a sub-scheduling mechanism.

  • First, a job of type container must be submitted, for example:
oarsub -I -t container -l nodes=10,walltime=2:10:00
  • Then it is possible to use the inner type to schedule new jobs within the previously created container job (13542 being the job id of the container job):
oarsub -I -t inner=13542 -l nodes=7,walltime=0:30:00
oarsub -I -t inner=13542 -l nodes=3,walltime=0:45:00
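
Putting it together, a minimal non-interactive sketch (the sleep duration and walltimes are arbitrary placeholders; oarsub prints an OAR_JOB_ID=... line, which we parse to get the container's job id):

JOBID=$(oarsub -t container -l nodes=10,walltime=2:10:00 "sleep 7800" | grep OAR_JOB_ID | cut -d= -f2)
oarsub -I -t inner=$JOBID -l nodes=7,walltime=0:30:00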