

The Nancy and Rennes Grid'5000 sites also host clusters dedicated to production use (including clusters with GPUs). See Nancy:Hardware and Rennes:Hardware for details.

The usage rules differ from the rest of Grid'5000:

  • Advance reservations (oarsub -r) are not allowed (to avoid fragmentation). Only submissions (and reservations that start immediately) are allowed.
  • All Grid'5000 users can use those nodes (provided they meet the conditions stated in Grid5000:UsagePolicy), but users outside of LORIA / Centre Inria Nancy -- Grand Est and IRISA / Centre Inria de l'Université de Rennes are expected to use their own local production resources first, and to use these nodes mostly for tasks that require Grid'5000 features. Examples of local production clusters are Cleps (Paris), Margaret (Saclay), Plafrim (Bordeaux), etc.

Using the resources

Getting an account

Users from the Loria laboratory (LORIA / Centre Inria Nancy -- Grand Est) and the Irisa laboratory (IRISA / Centre Inria de l'Université de Rennes) who want to access Grid'5000 primarily for production use must use the account request form, like regular Grid'5000 users.

  • The following fields must be filled as follows:
    • Group Granting Access (GGA): either the group named after your research team, or, if your team is not in the team list: loria (for Nancy) or igrida (for Rennes).
    • Laboratory: LORIA or IRISA

Other users from Nancy (not belonging to the Loria laboratory) can ask to join using the nancy-misc Group Granting Access while other users from Rennes (not belonging to the Irisa laboratory) can ask to join using the rennes-misc Group Granting Access.

  • Users are automatically subscribed to the Grid'5000 users mailing list: this list is the user-to-user and user-to-admin communication channel for help and support requests on Grid'5000. The technical team can be reached via the Support page.

Learning to use the Moyens de Calcul (computing resources) hosted by Grid'5000

Refer to the Production:Getting Started tutorial (derived from the Grid'5000 Getting Started tutorial). Other tutorials are listed on the Users Home page.

Using deep learning software on Grid'5000

A tutorial for using deep learning software on Grid'5000, written by Ismael Bada, is also available.

Using production resources

To access production resources, you need to submit jobs in the production queue:

oarsub -q production -I                             # interactive job on any production node
oarsub -q production -p grele -I                    # interactive job on the grele cluster
oarsub -q production -l nodes=2,walltime=24 './'    # batch job: 2 nodes for 24 hours
oarsub -q production -l walltime=24 -t deploy -I    # interactive job with deployment rights

Dashboards and status pages



Contact information and support

For support, see the Support page.



Data storage

Research teams, groups of people from different teams, or individuals can request Group Storage spaces in order to manage their data at the team level. The main benefit of Group Storage is that it lets the members of a group share their data (corpora, datasets, results, ...) and easily overcome the quota restrictions of the home directories.

Please remember that NFS servers (the home directories are also served by an NFS server) are quite slow at processing huge numbers of small files during a computation. If you are in this case, consider doing most of your I/O on the nodes' local disks and copying the results back to the NFS server at the end of the experiment.
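A minimal sketch of this staging pattern follows. All paths are illustrative (only /tmp being node-local is a Grid'5000 convention); the dataset and result directories are hypothetical examples.

```shell
# Stage inputs from the NFS-served home to node-local scratch, do the
# I/O-heavy work there, then copy the results back in one pass.
USER="${USER:-$(id -un)}"
WORK="/tmp/$USER/experiment"                 # node-local scratch
mkdir -p "$WORK/input" "$WORK/output"
# Stage in (hypothetical dataset path; ignore errors if it does not exist):
rsync -a "$HOME/datasets/corpus/" "$WORK/input/" 2>/dev/null || true
# ... run the experiment here, reading $WORK/input and writing $WORK/output ...
# Stage out the results in a single transfer at the end:
mkdir -p "$HOME/results"
rsync -a "$WORK/output/" "$HOME/results/run1/" 2>/dev/null || true
```

The point is to replace many small NFS operations during the computation with two bulk transfers, one before and one after.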

See here for the other kinds of storage available on the platform.


At Nancy, Group Storage is used to control access to the storage spaces located on the storage[1-5] NFS servers (more information about the maximum capacity of each of these servers can be found here). Ask your GGA leader whether your team has access to one or more storage spaces (this is the case, for instance, for the following teams: Bird, Capsid, Caramba, Heap, Multispeech, Optimist, Orpailleur, Semagramme, Sisr, Synalp, Tangram).


At Rennes, Group Storage is used to control access to the storage spaces located on the NFS server (more information about the maximum capacity of this server can be found here). Ask your GGA leader whether your team has access to one or more storage spaces (this is the case, for instance, for the cidre and sirocco teams, with the compactdisk storage).

I am physically located in the LORIA/IRISA building, is there a shorter path to connect?

If you are located in the LORIA/IRISA building, you can benefit from a direct connection that does not go through the Grid'5000 national access machines (access-south and access-north). To do so, use access.nancy or access.rennes (instead of access).

Terminal.png mylaptop: ssh login@access.nancy.grid5000.fr
Terminal.png mylaptop: ssh login@access.rennes.grid5000.fr

Configure an SSH alias for the local access

To establish a connection to the Grid'5000 network from the local access, you can configure your SSH client as follows:

Terminal.png laptop:
editor ~/.ssh/config
Host g5kl
  User login
  Hostname access.site.grid5000.fr
  ForwardAgent no

Host *.g5kl
  User login
  ProxyCommand ssh g5kl -W "$(basename %h .g5kl):%p"
  ForwardAgent no

Reminder: login is your Grid'5000 username and site is either nancy or rennes.

With such a configuration, you can:

  • connect to the frontend of your local site
Terminal.png laptop:
ssh g5kl
  • transfer files from your laptop to your local frontend (with better bandwidth than using the national Grid'5000 access)
Terminal.png laptop:
scp myFile g5kl:~/
  • access the frontend of a different site:
Terminal.png laptop:
ssh grenoble.g5kl
  • transfer files from your laptop to a different frontend
Terminal.png laptop:
scp myFile sophia.g5kl:~/

How to access data hosted on Inria/Loria or Inria/Irisa servers

The Grid'5000 network is not directly connected to the Inria/Loria or Inria/Irisa internal servers. If you want to access them from the Grid'5000 frontends and/or nodes, you need to use a local bastion host. If you regularly need to transfer data, it is highly recommended to configure the SSH client on each Grid'5000 frontend.

Note.png Note

Please note that you have a different home directory on each Grid'5000 site, so you may need to replicate your SSH configuration across multiple sites.

For Nancy, an access machine is hosted on the Loria side. That machine can be used to access all services in the Inria/Loria environment.

Terminal.png frontend:
editor ~/.ssh/config
Host accessloria
   User jdoe # to be replaced by your LORIA login

Host *.loria
   ProxyCommand ssh accessloria -W $(basename %h .loria):%p
   User jdoe # to be replaced by your LORIA login
Note.png Note

Given that the access machine only accepts logins using an SSH key, you cannot simply connect with your LORIA password.

For Rennes, an access machine is hosted on the Irisa side. That machine can be used to access all services in the Inria/Irisa environment.

Terminal.png frontend:
editor ~/.ssh/config
Host transit
   User jdoe # to be replaced by your IRISA login

Data hosted on Inria's NAS server is accessible under /nfs on that machine. Assuming you have set up the configuration above in your home directory on the Grenoble site:

Terminal.png fgrenoble:
scp transit:/nfs/ ~/local_dir

Transfer files to Grid'5000 storage

With that setup, you can now use:

  • rsync, to synchronize data between the Inria/Loria environment and your local home on a Grid'5000 frontend;
  • sshfs, to mount your data directory from the Inria/Loria environment under your local home, i.e. mount your /users/my_team/my_username directory (origin) onto a folder on fnancy (destination).
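For the rsync case, a one-way synchronization might look like the following sketch. The host tregastel.loria resolves through the *.loria ProxyCommand alias configured above; the login and paths are hypothetical examples.

```shell
# Pull a directory from the Inria/Loria environment to the local home on fnancy.
# 'tregastel.loria' goes through the accessloria bastion via the SSH alias;
# login and paths are illustrative.
rsync -avz jdoe@tregastel.loria:/users/myteam/jdoe/data/ ~/data/
```

Running the same command again only transfers files that changed, which makes it suitable for keeping the two copies in sync.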


Terminal.png fnancy:
sshfs -o idmap=user jdoe@tregastel.loria:/users/myteam/jdoe ~/local_dir

To unmount the remote filesystem:

Terminal.png fnancy:
fusermount -u ~/local_dir

I submitted a job, there are free resources, but my job doesn't start as expected!

Most likely, this is because of our configuration of resource restrictions per walltime. To make sure that someone requesting only a few nodes for a small amount of time can get them soon enough, the nodes are split into categories. The split depends on each cluster and is visible in the Gantt chart. An example split is:

  • 20% of the nodes only accept jobs with walltime lower than 1h
  • 20% -- 2h
  • 20% -- 24h (1 day)
  • 20% -- 48h (2 days)
  • 20% -- 168h (one week)

Note that best-effort jobs are excluded from those limitations.
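To illustrate how such a split constrains scheduling, the following sketch computes the share of nodes eligible for a job of a given walltime, using the example percentages above (the function name is ours, and real values vary per cluster):

```shell
# For a requested walltime in hours, print the percentage of nodes that could
# accept the job under the example split above (illustrative numbers only).
max_share_for_walltime() {
  h=$1
  if   [ "$h" -le 1 ];   then echo 100   # every category accepts jobs up to 1h
  elif [ "$h" -le 2 ];   then echo 80
  elif [ "$h" -le 24 ];  then echo 60
  elif [ "$h" -le 48 ];  then echo 40
  elif [ "$h" -le 168 ]; then echo 20    # only the one-week category remains
  else                        echo 0     # nothing accepts more than one week
  fi
}
max_share_for_walltime 24   # prints 60: a 24h job fits on 60% of the nodes
```

This is why a long job can wait even while short-walltime nodes sit idle: those nodes are simply not eligible for it.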

To see the exact walltime partition of each production cluster, have a look at the Nancy Hardware page or Rennes Hardware page.

Another OAR feature that can impact the scheduling of your jobs is fair-sharing, which is based on the notion of karma: it assigns a dynamic priority to submissions based on each user's submission history. With that feature, jobs from users who rarely submit will generally be scheduled earlier than jobs from heavy users.

I have an important demo, can I reserve all resources in advance?

There's a special challenge queue that can be used to combine resources from the classic Grid'5000 clusters and the production clusters for special events. If you would like to use it, please ask for a special permission from the executive committee.

Can I use besteffort jobs in production?

Yes, you can submit a besteffort job on the production resources by using OAR's -t besteffort option. Here is an example:

Terminal.png fnancy:
oarsub -t besteffort -q production './'

If you don't specify the -q production option, your job may run on both production and non-production resources.

Energy costs

Grid'5000 nodes are automatically shut down when they are not reserved so, when possible, it is a good idea to reserve nodes during cheaper time slots.

Electricity costs are currently:

  • Periods:
    • Peak hours: December, January, February; 09:00-11:00 / 18:00-20:00
    • Winter full hours: 06:00-22:00 (excluding the peak hours specified in article 18)
    • Winter off-peak hours: 22:00-06:00
    • Summer full hours: 06:00-22:00
    • Summer off-peak hours: 22:00-06:00
    • Sundays count only as off-peak hours, in both winter and summer.
  • Cost per kWh:
    • Peak hour: 10.893 c€/kWh
    • Winter full hour: 6.535 c€/kWh
    • Winter off-peak hour: 4.474 c€/kWh
    • Summer full hour: 4.125 c€/kWh
    • Summer off-peak hour: 2.580 c€/kWh
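The peak-hour rule above can be expressed as a small helper for planning reservations. This is a sketch: the function name and argument conventions are ours, and treating the slot boundaries as half-open intervals (09:00 inclusive to 11:00 exclusive, likewise 18:00-20:00) is an assumption.

```shell
# Return success (0) when the given month (1-12) and hour (0-23) fall in a
# peak-tariff slot per the table above: Dec-Feb, 09:00-11:00 or 18:00-20:00.
is_peak_hour() {
  m=$1; h=$2
  case "$m" in
    12|1|2)
      if [ "$h" -ge 9 ]  && [ "$h" -lt 11 ]; then return 0; fi
      if [ "$h" -ge 18 ] && [ "$h" -lt 20 ]; then return 0; fi
      ;;
  esac
  return 1
}
is_peak_hour 1 10 && echo "peak" || echo "off-peak"   # January, 10:00 -> peak
```

Everything outside these slots falls into the full-hour or off-peak rates, with off-peak (22:00-06:00 and Sundays) being the cheapest time to run jobs.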

How to cite / Comment citer

If you use the Grid'5000 production clusters for your research and publish your work, please add this sentence in the acknowledgements section of your paper:

Experiments presented in this paper were carried out using the Grid'5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations (see