Grid5000:Home

__NOTOC__ __NOEDITSECTION__
{|width="95%"
|- valign="top"
|bgcolor="#888888" style="border:1px solid #cccccc;padding:2em;padding-top:1em;"|
[[File:Slices-ri-white-color.png|260px|left]]
<b>Grid'5000 is a precursor infrastructure of [http://www.slices-ri.eu SLICES-RI], Scientific Large Scale Infrastructure for Computing/Communication Experimental Studies.</b>
<br/>
Content on this website is partly outdated. Technical information stays relevant.
|}
{|width="95%"
|- valign="top"
|bgcolor="#f5fff5" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
[[Image:g5k-backbone.png|thumbnail|260px|right|Grid'5000]]
'''Grid'5000 is a large-scale and flexible testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing, including Cloud, HPC, Big Data and AI.'''


<br>
Read more about our [[Team|teams]], our [[Publications|publications]], and the [[Grid5000:UsagePolicy|usage policy]] of the testbed. Then [[Grid5000:Get_an_account|get an account]], and learn how to use the testbed with our [[Getting_Started|Getting Started tutorial]] and the rest of our [[:Category:Portal:User|Users portal]].
<b>Grid'5000 is merging with [https://fit-equipex.fr FIT] to build the [http://www.silecs.net/ SILECS Infrastructure for Large-scale Experimental Computer Science]. Read [http://www.silecs.net/wp-content/uploads/2018/04/Desprez-SILECS.pdf an Introduction to SILECS] (April 2018)</b>


<br>
Published documents and presentations:
* [[Media:Grid5000.pdf|Presentation of Grid'5000]] (April 2019)
* [https://www.grid5000.fr/mediawiki/images/Grid5000_science-advisory-board_report_2018.pdf Report from the Grid'5000 Science Advisory Board (2018)]
* [[Lille:Home|Lille]]
* [[Luxembourg:Home|Luxembourg]]
* [[Louvain:Home|Louvain]]
|width="33%" bgcolor="#f5f5f5" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
* [[Lyon:Home|Lyon]]
* [[Nancy:Home|Nancy]]
* [[Nantes:Home|Nantes]]
|width="33%" bgcolor="#f5f5f5" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
* [[Rennes:Home|Rennes]]
* [[Sophia:Home|Sophia-Antipolis]]
* [[Strasbourg:Home|Strasbourg]]
* [[Toulouse:Home|Toulouse]]
|-


== Current funding ==
Since June 2008, Inria has been the main contributor to [[Grid5000:Funding|Grid'5000 funding]].

Revision as of 23:37, 6 June 2025





Key features:

  • provides access to a large amount of resources: 15000 cores and 800 compute nodes grouped in homogeneous clusters, featuring various technologies: PMEM, GPU, SSD, NVMe, 10G and 25G Ethernet, InfiniBand, Omni-Path
  • highly reconfigurable and controllable: researchers can experiment with a fully customized software stack thanks to bare-metal deployment features, and can isolate their experiments at the networking layer
  • advanced monitoring and measurement features for trace collection of networking and power consumption, providing a deep understanding of experiments
  • designed to support Open Science and reproducible research, with full traceability of infrastructure and software changes on the testbed
  • a vibrant community of 500+ users supported by a solid technical team
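Per-cluster hardware descriptions are published through the Grid'5000 reference API. As a minimal sketch, the snippet below only composes an endpoint URL; the `api.grid5000.fr` path layout is an assumption taken from the public API, so verify it against the API documentation in the Users portal before relying on it:

```shell
# Sketch: compose a reference-API URL describing one cluster.
# Site and cluster names are examples; the path layout is an assumption.
site="grenoble"
cluster="sasquatch"
echo "https://api.grid5000.fr/stable/sites/${site}/clusters/${cluster}"
```

Fetching that URL (with authentication, from inside or outside the testbed) would return a JSON description of the cluster's nodes.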




Grid'5000 is supported by a scientific interest group (GIS) hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations. Inria has been supporting Grid'5000 through ADT ALADDIN-G5K (2007-2013), ADT LAPLACE (2014-2016), and IPL HEMERA (2010-2014).


Current status (at 2026-02-18 01:47): 5 current events, 6 planned (details)


Random pick of publications

Five random publications that benefited from Grid'5000 (at least 2935 overall):

  • Marc Jourdan, Clémence Réda. An Anytime Algorithm for Good Arm Identification. 2024. hal-04688141
  • Cassandre Vey, Adrien van den Bossche, Réjane Dalcé, Georges da Costa, Olivier Negro, et al. Experimenting IoT-Edge-Cloud-HPC Continuum on Existing Platforms. 2025 IEEE 25th International Symposium on Cluster, Cloud and Internet Computing Workshops (CCGridW), IEEE, May 2025, Tromsø, Norway. 10.1109/CCGridW65158.2025.00026. hal-05147272
  • Matthieu Simonin, Anne-Cécile Orgerie. Méthodologies de calcul d'empreinte carbone sur une plateforme de calcul : exemple du site Grid'5000 de Rennes. JRES 2024 - Journées réseaux de l'enseignement et de la recherche, Renater, Dec 2024, Rennes, France. pp.1-14. hal-04893984
  • Maurice Brémond, Hugo Brunie, Laurent Debreu, Rupert W Ford, Florian Lemarié, et al. Poseidon: A Source-to-Source Translator for Holistic HPC Optimizations of Ocean Models on Regular Grids. SC 2024 - International Conference for High Performance Computing, Networking, Storage, and Analysis, Nov 2024, Atlanta (Georgia), United States. pp.1-1, 2024, 10.5281/zenodo.11190458. hal-04811677
  • Cherif Latreche, Nikos Parlavantzas, Hector A Duran-Limon. FoRLess: A Deep Reinforcement Learning-based approach for FaaS Placement in Fog. UCC 2024 - 17th IEEE/ACM International Conference on Utility and Cloud Computing, Dec 2024, Sharjah, United Arab Emirates. pp.1-9. hal-04791252


Latest news

Cluster Sasquatch is now in default queue at Grenoble

We are pleased to announce that the Sasquatch [1] cluster is now available in the default queue.

Sasquatch is a cluster composed of 2 HPE RL300 nodes, each featuring:

  • 1x ARM64 CPU Neoverse-N1 (Ares), 80 cores/CPU (Ampere Altra) [2]
  • 1 TiB RAM
  • 1x 1.6 TB NVMe
  • 2x 25Gbps network interface (first NIC wired at 25Gbps and second NIC at 10Gbps)

This cluster was funded by the PEPR IA.

[1] https://www.grid5000.fr/w/Grenoble:Hardware#sasquatch

[2] https://amperecomputing.com/briefs/ampere-altra-family-product-brief

Best regards,
Grid'5000 Technical Team

-- Grid'5000 Team 10:15, 11 February 2026 (CEST)

Cluster Spirou is now in default queue at Louvain

We are pleased to announce that the Spirou [1] cluster of the newly installed Louvain site is now available in the default queue.

Spirou is a cluster composed of 8 Lenovo ThinkSystem SR630 V2 nodes, each featuring:

  • 2x CPU Intel Xeon Gold 5318Y (Ice Lake-SP), 24 cores/CPU
  • 256 GiB RAM
  • 1x 4.0 TB HDD SATA Lenovo
  • 2x 100Gbps Mellanox network interface

Be aware that we noticed I/O inconsistencies on this cluster. We advise users to take this into account when performing experiments on the cluster. See the following bug for more information: https://intranet.grid5000.fr/bugzilla/show_bug.cgi?id=16938

This cluster was funded by the Fonds de la Recherche Scientifique – FNRS (F.R.S.–FNRS), and its operation is supported by F.R.S.–FNRS and the Wallonia region (SPW).

[1] https://www.grid5000.fr/w/Louvain:Hardware#spirou

Best regards,
Grid'5000 Technical Team

-- Grid'5000 Team 10:24, 12 January 2026 (CEST)

End of support for CentOS 7/8 and CentOS Stream 8 environments

Support for the CentOS 7/8 and CentOS Stream 8 kadeploy environments has stopped due to the end of upstream support and compatibility issues with recent hardware.

The last versions of the CentOS 7 environments (version 2024071117), the CentOS 8 environments (version 2024071119), and the CentOS Stream 8 environments (version 2024070316) will remain available on /grid5000. Older versions can still be accessed in the archive directory (see /grid5000/README.unmaintained-envs for more information).
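Affected users need to redeploy on a maintained environment. As a hedged sketch only, a redeployment inside an OAR "deploy" job might look like the command below; the environment name `debian11-base` and the exact flags are assumptions, so check the environment list on your frontend first:

```shell
# Sketch only: on a Grid'5000 frontend, inside a deploy job, a
# maintained environment could be installed on the reserved nodes:
#   kadeploy3 -e debian11-base -f $OAR_NODEFILE -k
# (-e: environment name, -f: file listing the nodes, -k: copy SSH key)
# The environment name is an assumption; we only print the command here,
# since kadeploy3 is not available outside the testbed.
echo "kadeploy3 -e debian11-base -f \$OAR_NODEFILE -k"
```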

-- Grid'5000 Team 08:44, 4 December 2025 (CEST)

Ecotaxe cluster is now in default queue at Nantes

We are pleased to announce that the ecotaxe cluster of Nantes is now available in the default queue.

As a reminder, ecotaxe is a cluster composed of 2 HPE ProLiant DL385 Gen10 Plus v2 servers [1].

Each node features:

  • 2x AMD EPYC 7453 (Zen 3), 28 cores/CPU
  • 3x Nvidia A100 80GB GPU
  • 256 GB memory
  • 1x 1.92 TB SSD + 2x 7.68 TB SSD
  • 100 Gb/s Intel Ethernet adapter [2]

To submit a job on this cluster, the following command may be used:

    oarsub -t exotic -p ecotaxe
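As a hedged sketch, such a submission might be extended with a node count, a walltime, and a script to run; the `-l` syntax follows common `oarsub` usage on Grid'5000, and `./my_experiment.sh` is a hypothetical placeholder, so adapt before use:

```shell
# Sketch only: reserve one ecotaxe node for two hours and run a script.
# "./my_experiment.sh" is a hypothetical placeholder; we only print the
# command here, since oarsub is not available outside the testbed.
echo "oarsub -t exotic -p ecotaxe -l host=1,walltime=2:00:00 ./my_experiment.sh"
```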

This cluster is co-funded by Région Pays de la Loire, FEDER and REACT EU via the CPER SAMURAI [3].

[1] https://www.grid5000.fr/w/Nantes:Hardware#ecotaxe

[2] The observed throughput depends on multiple parameters such as the workload, the number of streams, etc.

[3] https://www.imt-atlantique.fr/fr/recherche-innovation/collaborer/projet/samurai

-- Grid'5000 Team 14:10, 02 December 2025 (CET)


Current funding

INRIA

CNRS

Universities

IMT Atlantique
Université Grenoble Alpes, Grenoble INP
Université Rennes 1, Rennes
Institut National Polytechnique de Toulouse / INSA / FERIA / Université Paul Sabatier, Toulouse
Université Bordeaux 1, Bordeaux
Université Lille 1, Lille
École Normale Supérieure, Lyon

Regional councils

Aquitaine
Auvergne-Rhône-Alpes
Bretagne
Champagne-Ardenne
Provence Alpes Côte d'Azur
Hauts de France
Lorraine