Revision as of 09:59, 13 May 2019 by Lnussbaum (talk | contribs)

Grid'5000 is a large-scale and versatile testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing including Cloud, HPC and Big Data.

Key features:

  • provides access to a large amount of resources: 12000 cores and 800 compute nodes grouped in homogeneous clusters, featuring various technologies: GPUs, SSDs, NVMe, 10G and 25G Ethernet, InfiniBand, Omni-Path
  • highly reconfigurable and controllable: researchers can experiment with a fully customized software stack thanks to bare-metal deployment features, and can isolate their experiment at the networking layer
  • advanced monitoring and measurement features for collecting traces of network traffic and power consumption, providing a deep understanding of experiments
  • designed to support Open Science and reproducible research, with full traceability of infrastructure and software changes on the testbed
  • a vibrant community of 500+ users supported by a solid technical team

Read more about our teams, our publications, and the usage policy of the testbed. Then get an account, and learn how to use the testbed with our Getting Started tutorial and the rest of our Users portal.

Grid'5000 is supported by a scientific interest group (GIS) hosted by Inria and including CNRS, RENATER, and several universities, as well as other organizations. Inria has supported Grid'5000 through ADT ALADDIN-G5K (2007-2013), ADT LAPLACE (2014-2016), and IPL HEMERA (2010-2014).

Current status (at 2019-07-15 20:04): No current events, 3 planned (details)

Random pick of publications

Five random publications that benefited from Grid'5000 (at least 2033 overall):

  • Aditya Arie Nugraha, Antoine Liutkus, Emmanuel Vincent. Deep neural network based multichannel audio source separation. Audio Source Separation, Springer, 2018. hal-01633858 view on HAL pdf
  • Maverick Chardet, Hélène Coullon, Dimitri Pertin, Christian Pérez. Madeus: A formal deployment model. 4PAD 2018 - 5th International Symposium on Formal Approaches to Parallel and Distributed Systems (hosted at HPCS 2018), Jul 2018, Orléans, France. pp.1-8. hal-01858150 view on HAL pdf
  • Simon Delamare, Pascal Morillon, Lucas Nussbaum. Réalisation d'expériences avec Grid'5000. JRES2017 - Journées Réseaux de l'enseignement et de la recherche, Nov 2017, Nantes, France. hal-01639524 view on HAL pdf
  • Pedro Bruel, Steven Quinito Masnada, Brice Videau, Arnaud Legrand, Jean-Marc Vincent, et al. Autotuning under Tight Budget Constraints: A Transparent Design of Experiments Approach. CCGrid 2019 - International Symposium in Cluster, Cloud, and Grid Computing, May 2019, Larnaca, Cyprus. pp.1-10. hal-02110868 view on HAL pdf
  • Benjamin Camus, Fanny Dufossé, Anne-Cécile Orgerie. The SAGITTA approach for optimizing solar energy consumption in distributed clouds with stochastic modeling. Smart Cities, Green Technologies, and Intelligent Transport Systems, pp.1-25, 2018. hal-01945821 view on HAL pdf

Latest news

Enabling GPU-level resource reservation in OAR

GPU-level resource reservation in OAR is now in service on Grid'5000. OAR now allows one to reserve one or more of the GPUs of a server hosting several GPUs, leaving the server's other GPUs available for other jobs.

Only the reserved GPUs will be available for computation in the job, as reported by the nvidia-smi command, for instance.

  • To reserve one GPU in a site, one can now run: $ oarsub -l gpu=1 ...
  • To reserve 2 GPUs on a host which possibly has more than 2 GPUs, one can run: $ oarsub -l host=1/gpu=2 ...
  • To reserve whole nodes (servers) with all GPUs, one can still run: $ oarsub -l host=3 -p "gpu_count > 0"
  • To reserve specific GPU models, one can use the "gpu_model" property in a filter: $ oarsub -l gpu=1 -p "gpu_model = 'Tesla V100'"
  • One can also filter on the cluster name after looking at the hardware pages for the description of the clusters: $ oarsub -l gpu=1 -p "cluster = 'chifflet'"
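The requests above can be composed (for example, adding a walltime to a multi-GPU reservation on one host). As a minimal sketch, the following shell snippet builds such an oarsub command line from a GPU count and a walltime before submitting; the variable names and the job script name are illustrative, and only the -l and -p syntax comes from the examples above:

```shell
# Build an oarsub resource request for a given number of GPUs on one host.
# The -l resource syntax follows the examples above; names are illustrative.
ngpus=2                 # number of GPUs to reserve on a single host
walltime="2:00:00"      # job duration in OAR's hh:mm:ss format
request="host=1/gpu=${ngpus},walltime=${walltime}"

# Print the full command rather than submitting it, so it can be reviewed
# first; ./my_job.sh is a hypothetical job script.
echo "oarsub -l ${request} -p \"gpu_model = 'Tesla V100'\" ./my_job.sh"
```

Running this prints the assembled command, which can then be executed on a frontend node once it looks right.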

Finally, please note that the drawgantt pages offer options to display GPU resources.

-- Grid'5000 Team 17:00, 10 July 2019 (CET)

Network-level federation available between Grid'5000 and Fed4FIRE (and beyond)

It is now possible to connect Grid'5000 resources to other testbeds from Fed4FIRE. This is implemented by on-demand "stitching" between KaVLAN VLANs and VLANs provided by RENATER and GEANT that connect us to a Software Defined Exchange (SDX) hosted by IMEC in Belgium. This SDX is also connected to other SDXes in the US, which should make it possible to combine resources from Grid'5000 and US testbeds such as GENI, Chameleon or CloudLab.

For more information, see

-- Grid'5000 Team 15:00, 9 July 2019 (CET)

Grid'5000 links to the Internet upgraded to 10 Gbps

Thanks to RENATER, which provides Grid'5000's network connectivity, we have just upgraded our Internet connection to 10 Gbps.

You should experience faster downloads and increased speed when uploading your data to the platform!

-- Grid'5000 Team 08:00, 9 July 2019 (CET)

TILECS Workshop -- all presentations available

The TILECS workshop (Towards an Infrastructure for Large-Scale Experimental Computer Science) was held last week in Grenoble. About 90 participants gathered to discuss the future of experimental infrastructures for Computer Science.

The slides for all presentations are now available on the workshop website:

-- Grid'5000 Team 10:00, 8 July 2019 (CET)

Read more news

Grid'5000 sites

Current funding

Since June 2008, Inria has been the main contributor to Grid'5000 funding.






Université Grenoble Alpes, Grenoble INP
Université Rennes 1, Rennes
Institut National Polytechnique de Toulouse / INSA / FERIA / Université Paul Sabatier, Toulouse
Université Bordeaux 1, Bordeaux
Université Lille 1, Lille
École Normale Supérieure, Lyon

Regional councils

Provence Alpes Côte d'Azur
Hauts de France