Grid5000:Home

Revision as of 09:29, 26 October 2023 by Lpouilloux
Grid'5000

Grid'5000 is a large-scale and flexible testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing including Cloud, HPC and Big Data and AI.

Key features:

  • provides access to a large amount of resources: 15,000 cores and 800 compute nodes grouped into homogeneous clusters, featuring various technologies: PMEM, GPU, SSD, NVMe, 10G and 25G Ethernet, InfiniBand, Omni-Path
  • highly reconfigurable and controllable: researchers can experiment with a fully customized software stack thanks to bare-metal deployment features, and can isolate their experiment at the networking layer
  • advanced monitoring and measurement features for the collection of networking and power-consumption traces, providing a deep understanding of experiments
  • designed to support Open Science and reproducible research, with full traceability of infrastructure and software changes on the testbed
  • a vibrant community of 500+ users supported by a solid technical team


Read more about our teams, our publications, and the usage policy of the testbed. Then get an account, and learn how to use the testbed with our Getting Started tutorial and the rest of our Users portal.
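The reconfigurability mentioned above follows a simple workflow: reserve nodes with OAR, then deploy a custom system image on bare metal with Kadeploy. As a rough sketch (the environment name `debian11-min` is one example; the exact names available on your site can be listed with `kaenv3 -l`):

```shell
# Reserve 2 nodes for 1 hour in deploy mode, allowing bare-metal deployment
oarsub -I -t deploy -l nodes=2,walltime=1:00:00

# From within the reservation, deploy a reference environment on the
# reserved nodes ($OAR_NODE_FILE lists them) and copy your SSH key (-k)
kadeploy3 -e debian11-min -f $OAR_NODE_FILE -k
```

The Getting Started tutorial covers this workflow in detail, including non-interactive reservations and custom environments.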

Grid'5000 is merging with FIT to build the SILECS Infrastructure for Large-scale Experimental Computer Science. Read an Introduction to SILECS (April 2018)



Grid'5000 is supported by a scientific interest group (GIS) hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations. Inria has been supporting Grid'5000 through ADT ALADDIN-G5K (2007-2013), ADT LAPLACE (2014-2016), and IPL HEMERA (2010-2014).


Current status (at 2025-11-24 14:51): 3 current events, 11 planned


Random pick of publications

Five random publications that benefited from Grid'5000 (at least 2925 overall):

  • Marc Jourdan, Rémy Degenne, Emilie Kaufmann. An ε-Best-Arm Identification Algorithm for Fixed-Confidence and Beyond. Advances in Neural Information Processing Systems (NeurIPS), Dec 2023, New Orleans, United States. hal-04306214
  • Reda Khoufache, Anisse Belhadj, Mustapha Lebbah, Hanene Azzag. Distributed MCMC Inference for Bayesian Non-parametric Latent Block Model. 28th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2024), May 2024, Taipei, Taiwan. pp. 271-283. doi:10.1007/978-981-97-2242-6_22. hal-04623748
  • Cédric Prigent, Alexandru Costan, Gabriel Antoniu, Loïc Cudennec. Enabling Federated Learning across the Computing Continuum: Systems, Challenges and Future Directions. Future Generation Computer Systems, 2024, 160, pp. 767-783. doi:10.1016/j.future.2024.06.043. hal-04659211
  • Shashikant Ilager, Daniel Balouek, Sidi Mohammed Kaddour, Ivona Brandic. Proteus: Towards Intent-driven Automated Resource Management Framework for Edge Sensor Nodes. FlexScience'24: Proceedings of the 14th Workshop on AI and Scientific Computing at Scale using Flexible Computing Infrastructures, Jun 2024, Pisa, Italy. pp. 1-8. doi:10.1145/3659995.3660037. hal-04775138
  • Nicolas Hubert, Pierre Monnin, Armelle Brun, Davy Monticolo. Sem@K: Is my knowledge graph embedding model semantic-aware?. Semantic Web – Interoperability, Usability, Applicability, 2023, 14 (6), pp. 1273-1309. doi:10.3233/SW-233508. hal-04344975


Latest news

Some changes on the hardware configuration of Grenoble nodes

We recently made some hardware changes to the yeti, troll and dahu clusters.

The changes are as follows:

  • yeti: Following a malfunction of the two NVMe disks on yeti-3, an NVMe disk from yeti-1 has been transferred to yeti-3, ensuring at least one functional NVMe disk per yeti node.

    New NVMe configuration of the nodes:
    • yeti-[1,3]: 1× NVMe
    • yeti-[2,4]: 2× NVMe

  • troll: Due to experimentation needs, the steering committee agreed to change the hardware configuration of the troll cluster, replacing the Omni-Path HPC interconnect (which linked troll to yeti and dahu) with the InfiniBand HPC interconnect already available for the drac cluster.

  • dahu: A few nodes of the dahu cluster recently encountered a recurrent problem with their Omni-Path (OPA) interfaces. Instead of fully retiring those nodes, we chose to disable their OPA interfaces.
    This change means that if you want to reserve a dahu node with OPA, you must specify it in your oarsub request. For example:

    oarsub -I -p "dahu and opa_count > 0"

    The nodes that have been modified are dahu-18, dahu-26 and dahu-30. More nodes may be added to this list in the future.

Cluster "clervaux" is now in the default queue in Luxembourg

We are pleased to announce that the clervaux[1] cluster in Luxembourg is now available in the default queue.

Clervaux is a cluster of 48 CPU nodes.

Each node features:

  • 2x Intel Xeon E5-2680 v4 CPUs (14 cores/CPU, 2 threads/core)
  • 128 GiB RAM
  • 1x 120 GB SATA SSD

This cluster was funded by the University of Luxembourg.

[1] https://www.grid5000.fr/w/Luxembourg:Hardware#clervaux
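Since clervaux is now in the default queue, no special queue option is needed to use it; a reservation only has to target the cluster by name. A minimal sketch, assuming the same oarsub property syntax as the dahu example above:

```shell
# Interactive job on one clervaux node (default queue), from the
# Luxembourg frontend, for 2 hours
oarsub -I -p clervaux -l nodes=1,walltime=2:00:00
```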

-- Grid'5000 Team 10:50, 21 October 2025 (CEST)

Tutorials of the SLICES-FR school 2025

The tutorials of the first SLICES-FR School, held from July 7th to 11th in Lyon, are available on the following pages:

  • All tutorials
  • Grid'5000 tutorials

-- Grid'5000 Team 15:19, 1 October 2025 (CEST)

End of support for Debian10 environments

Support for the debian10/buster kadeploy environments has been discontinued, due to the end of upstream support and compatibility issues with recent hardware.

The last version of the debian10 environments (version 2025082716) will remain available on /grid5000. Older versions can still be accessed in the archive directory (see /grid5000/README.unmaintained-envs for more information).

-- Grid'5000 Team 09:21, 1 October 2025 (CEST)


Read more news

Grid'5000 sites

Current funding

Since June 2008, Inria has been the main contributor to Grid'5000 funding.

Inria

CNRS

Universities

IMT Atlantique
Université Grenoble Alpes, Grenoble INP
Université Rennes 1, Rennes
Institut National Polytechnique de Toulouse / INSA / FERIA / Université Paul Sabatier, Toulouse
Université Bordeaux 1, Bordeaux
Université Lille 1, Lille
École Normale Supérieure, Lyon

Regional councils

Aquitaine
Auvergne-Rhône-Alpes
Bretagne
Champagne-Ardenne
Provence Alpes Côte d'Azur
Hauts de France
Lorraine