Grid5000:Home


Grid'5000 is a precursor infrastructure of SLICES-RI, the Scientific Large-Scale Infrastructure for Computing/Communication Experimental Studies.
Content on this website is partly outdated. Technical information remains relevant.

Grid'5000

Grid'5000 is a large-scale and flexible testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing, including Cloud, HPC, Big Data and AI.

Key features:

  • provides access to a large amount of resources: 15,000 cores and 800 compute nodes grouped in homogeneous clusters, featuring various technologies: PMEM, GPUs, SSDs, NVMe, 10G and 25G Ethernet, InfiniBand, Omni-Path
  • highly reconfigurable and controllable: researchers can experiment with a fully customized software stack thanks to bare-metal deployment features, and can isolate their experiments at the networking layer
  • advanced monitoring and measurement features for collecting traces of networking and power consumption, providing a deep understanding of experiments
  • designed to support Open Science and reproducible research, with full traceability of infrastructure and software changes on the testbed
  • a vibrant community of 500+ users supported by a solid technical team


Read more about our teams, our publications, and the usage policy of the testbed. Then get an account, and learn how to use the testbed with our Getting Started tutorial and the rest of our Users portal.
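
A typical first session, as covered in the Getting Started tutorial, looks roughly like this (a minimal sketch; the hostnames and the exact oarsub invocation are assumptions based on the tutorial's usual workflow, so refer to it for authoritative instructions):

  ssh login@access.grid5000.fr   # replace "login" with your Grid'5000 username
  ssh nancy                      # hop to one of the site frontends
  oarsub -I                      # reserve a node and open an interactive shell on it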




Grid'5000 is supported by a scientific interest group (GIS) hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations. Inria has been supporting Grid'5000 through ADT ALADDIN-G5K (2007-2013), ADT LAPLACE (2014-2016), and IPL HEMERA (2010-2014).


Current status (at 2026-01-14 22:36): 6 current events, 5 planned


Random pick of publications

Five random publications that benefited from Grid'5000 (at least 2929 overall):

  • Cédric Boscher, Nawel Benarba, Fatima Elhattab, Sara Bouchenak. Personalized Privacy-Preserving Federated Learning. Proceedings of the 25th International Middleware Conference, Dec 2024, Hong Kong, China. pp. 454-466. doi:10.1145/3652892.3700785. hal-04770214
  • Céline Acary-Robert, Emmanuel Agullo, Ludovic Courtès, Marek Felšöci, Konrad Hinsen, et al. Guix-HPC Activity Report 2022–2023. Inria Bordeaux - Sud Ouest. 2024, pp. 1-32. hal-04500140
  • Cédric Prigent, Alexandru Costan, Gabriel Antoniu, Loïc Cudennec. Enabling Federated Learning across the Computing Continuum: Systems, Challenges and Future Directions. Future Generation Computer Systems, 2024, 160, pp. 767-783. doi:10.1016/j.future.2024.06.043. hal-04659211
  • Roblex Nana Tchakoute, Claude Tadonki, Petr Dokladal, Youssef Mesri. A Flexible Operational Framework for Energy Profiling of Programs. 2024 International Symposium on Computer Architecture and High Performance Computing Workshops (SBAC-PADW), Nov 2024, Hilo, United States. pp. 12-22. doi:10.1109/SBAC-PADW64858.2024.00014. hal-04819054
  • Natalia Tomashenko, Emmanuel Vincent, Marc Tommasi. Exploiting Context-dependent Duration Features for Voice Anonymization Attack Systems. Interspeech 2025, Aug 2025, Rotterdam, Netherlands. hal-05099074


Latest news

Cluster Spirou is now in the default queue at Louvain

We are pleased to announce that the Spirou[1] cluster of the newly installed Louvain site is now available in the default queue.

Spirou is a cluster composed of 8 Lenovo ThinkSystem SR630 V2 nodes, each featuring:

  • 2x Intel Xeon Gold 5318Y CPUs (Ice Lake-SP), 24 cores/CPU
  • 256 GiB RAM
  • 1x 4.0 TB SATA HDD (Lenovo)
  • 2x 100 Gbps Mellanox network interfaces

Be aware that we noticed I/O inconsistencies on this cluster. We advise users to take this into account when performing experiments on it. See the following bug for more information: https://intranet.grid5000.fr/bugzilla/show_bug.cgi?id=16938
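
Since spirou is in the default queue, a node can be reserved with a plain OAR submission, for example (a minimal sketch, mirroring the oarsub syntax used in the other announcements on this page):

  oarsub -I -p spirou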


This cluster was funded by the Fonds de la Recherche Scientifique – FNRS (F.R.S.–FNRS), and its operation is supported by F.R.S.–FNRS and the Wallonia region (SPW).

[1] https://www.grid5000.fr/w/Louvain:Hardware#spirou

Best regards,

Grid'5000 Technical Team

-- Grid'5000 Team 10:24, 12 January 2026 (CEST)

End of support for CentOS 7/8 and CentOS Stream 8 environments

Support for the CentOS 7/8 and CentOS Stream 8 kadeploy environments has been stopped, due to the end of upstream support and compatibility issues with recent hardware.

The last versions of the CentOS 7 (2024071117), CentOS 8 (2024071119) and CentOS Stream 8 (2024070316) environments will remain available on /grid5000. Older versions can still be accessed in the archive directory (see /grid5000/README.unmaintained-envs for more information).
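
For users who still need one of these images, deployment follows the usual kadeploy3 workflow, along these lines (a minimal sketch; the environment name centos7-min is an assumption based on Grid'5000's usual naming scheme, and kaenv3 -l lists the names actually available):

  oarsub -I -t deploy -l nodes=1,walltime=1     # reserve a node in deployment mode
  kadeploy3 -f $OAR_NODE_FILE -e centos7-min    # deploy the archived environment on it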

-- Grid'5000 Team 08:44, 4 December 2025 (CEST)

Ecotaxe cluster is now in the default queue at Nantes

We are pleased to announce that the ecotaxe cluster of Nantes is now available in the default queue.

As a reminder, ecotaxe is a cluster composed of 2 HPE ProLiant DL385 Gen10 Plus v2 servers[1].

Each node features:

  • 2x AMD EPYC 7453 (Zen 3), 28 cores/CPU
  • 3x Nvidia A100 80 GB GPUs
  • 256 GB memory
  • 1x 1.92 TB SSD + 2x 7.68 TB SSD
  • 100 Gb/s Intel Ethernet adapter [2]

To submit a job on this cluster, the following command may be used:

  oarsub -t exotic -p ecotaxe

This cluster is co-funded by Région Pays de la Loire, FEDER and REACT EU via the CPER SAMURAI [3].

[1] https://www.grid5000.fr/w/Nantes:Hardware#ecotaxe

[2] The observed throughput depends on multiple parameters such as the workload, the number of streams, ...

[3] https://www.imt-atlantique.fr/fr/recherche-innovation/collaborer/projet/samurai

-- Grid'5000 Team 14:10, 02 December 2025 (CET)

Some changes to the hardware configuration of Grenoble nodes

We recently made some hardware changes to the yeti, troll and dahu clusters.

The changes are as follows:

  • yeti: Following a malfunction of the two NVMe disks on yeti-3, an NVMe disk from yeti-1 has been transferred to yeti-3 to ensure that we have at least one functional NVMe disk per yeti node. New NVMe configuration of the nodes:
    • yeti-[1,3]: 1× NVMe
    • yeti-[2,4]: 2× NVMe

  • troll: Due to experimentation needs, the steering committee agreed to change the hardware configuration of the troll cluster, replacing the Omni-Path HPC network interconnect (which linked troll to yeti and dahu) with the InfiniBand HPC network interconnect already available for the drac cluster.

  • dahu: A few nodes of the dahu cluster recently encountered a recurrent problem with their OPA interfaces. Instead of fully retiring those nodes, we chose to disable their OPA interfaces. This change means that if you want to reserve a dahu node with OPA, you must specify it in your oarsub request. For example:

    oarsub -I -p "dahu and opa_count > 0"

The nodes that have been modified are dahu-18, dahu-26 and dahu-30. More nodes may be added to this list in the future.

-- Grid'5000 Team 14:50, 24 November 2025 (CEST)


Read more news


Current funding

Inria

CNRS

Universities

IMT Atlantique
Université Grenoble Alpes, Grenoble INP
Université Rennes 1, Rennes
Institut National Polytechnique de Toulouse / INSA / FERIA / Université Paul Sabatier, Toulouse
Université Bordeaux 1, Bordeaux
Université Lille 1, Lille
École Normale Supérieure, Lyon

Regional councils

Aquitaine
Auvergne-Rhône-Alpes
Bretagne
Champagne-Ardenne
Provence Alpes Côte d'Azur
Hauts de France
Lorraine