Grid'5000 is a large-scale and flexible testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing, including Cloud, HPC, Big Data, and AI.
- provides access to a large amount of resources: 12,000 cores and 800 compute nodes grouped in homogeneous clusters, featuring various technologies: GPU, SSD, NVMe, 10G and 25G Ethernet, InfiniBand, Omni-Path
- highly reconfigurable and controllable: researchers can experiment with a fully customized software stack thanks to bare-metal deployment features, and can isolate their experiment at the networking layer
- advanced monitoring and measurement features for collecting traces of network traffic and power consumption, providing a deep understanding of experiments
- designed to support Open Science and reproducible research, with full traceability of infrastructure and software changes on the testbed
- a vibrant community of 500+ users supported by a solid technical team
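The bare-metal reconfiguration mentioned above is typically done by combining the OAR resource manager with the Kadeploy deployment tool. As a rough sketch (the walltime, node count, and environment name below are illustrative values, not defaults; see the Getting Started tutorial for the exact workflow):

```shell
# Reserve two nodes in deploy mode for two hours (node count and
# walltime are illustrative, not defaults).
oarsub -I -t deploy -l nodes=2,walltime=2:00:00

# From the reservation shell, deploy a customized environment on the
# reserved nodes. $OAR_NODE_FILE is set by OAR; "debian10-x64-base"
# is an example environment name.
kadeploy3 -f $OAR_NODE_FILE -e debian10-x64-base -k
```

After deployment, the nodes boot into the chosen environment with root access, so the full software stack can be customized.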
Read more about our teams, our publications, and the usage policy of the testbed. Then get an account, and learn how to use the testbed with our Getting Started tutorial and the rest of our Users portal.
Grid'5000 is merging with FIT to build the SILECS Infrastructure for Large-scale Experimental Computer Science. Read an Introduction to SILECS (April 2018)
Recently published documents and presentations:
Grid'5000 is supported by a scientific interest group (GIS) hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations. Inria has been supporting Grid'5000 through ADT ALADDIN-G5K (2007-2013), ADT LAPLACE (2014-2016), and IPL HEMERA (2010-2014).
Current status (at 2020-02-25 09:53): No current events, 2 planned (details)
Random pick of publications
Five random publications that benefited from Grid'5000 (at least 2139 overall):
- Ariel Oleksiak, Laurent Lefèvre, Pedro Alonso, Georges da Costa, Vincenzo de Maio, et al.. Energy aware ultrascale systems. Ultrascale Computing Systems, Institution of Engineering and Technology, pp.127-188, 2019, 10.1049/PBPC024E_ch. hal-02163289 view on HAL pdf
- Subarna Chatterjee, Christine Morin. Experimental Study on the Performance and Resource Utilization of Data Streaming Frameworks. CCGrid 2018 - 18th IEEE/ACM Symposium on Cluster, Cloud and Grid Computing, May 2018, Washington, DC, United States. pp.143-152, 10.1109/CCGRID.2018.00029. hal-01823697 view on HAL pdf
- Yunbo Li, Anne-Cécile Orgerie, Ivan Rodero, Betsegaw Lemma Amersho, Manish Parashar, et al.. End-to-end Energy Models for Edge Cloud-based IoT Platforms: Application to Data Stream Analysis in IoT. Future Generation Computer Systems, Elsevier, 2018, 87, pp.667-678. 10.1016/j.future.2017.12.048. hal-01673501 view on HAL pdf
- Stéphane Caux, Paul Renaud-Goud, Gustavo Rostirolla, Patricia Stolf. IT Optimization for Datacenters Under Renewable Power Constraint. 24th European Conference on Parallel Processing (Euro-Par 2018), Aug 2018, Turin, Italy. pp.339-351. hal-02305348 view on HAL pdf
- Adrien Wion, Mathieu Bouet, Luigi Iannone, Vania Conan. Let there be Chaining: How to Augment your IGP to Chain your Services. 2019. hal-02165785 view on HAL pdf
Support for persistent memory (PMEM)
Grid'5000 now features, among the different technologies it provides, some nodes with persistent memory.
Please find an introduction and some documentation on how to experiment on the persistent memory technology in the PMEM page.
-- Grid'5000 Team 17:35, February 19th 2020 (CET)
New cluster available in Nancy: grue (20 GPUs)
We are pleased to announce that the Grue cluster in Nancy¹ (production queue) is now available.
It features 5 Dell R7425 server nodes, each with four Tesla T4 GPUs², 128 GB DDR4, 1 x 480 GB SSD, and 2 x AMD EPYC 7351 CPUs (16 cores/CPU).
As this cluster features 4 GPUs per node, we remind you that you can monitor GPU (and node) usage using the Ganglia tool (std environment only).
If your experiments do not require all the GPUs of a single node, it is possible to reserve GPUs³ at the resource level (see https://grid5000.fr/w/News#Enabling_GPU_level_resource_reservation_in_OAR for some examples).
You can also use the nvidia-smi and htop commands on your reserved nodes to get more information about your GPU/CPU usage.
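GPU-level reservation uses OAR's gpu resource type. A minimal sketch, assuming the production queue and a single-GPU job (the counts and queue name are illustrative; the News page linked above has the authoritative examples):

```shell
# Reserve a single GPU (rather than a whole node) interactively in the
# production queue, where the grue cluster lives.
oarsub -I -q production -l gpu=1

# Once on the node, inspect the GPU(s) allocated to the job.
nvidia-smi
```

Reserving at the GPU level leaves the node's remaining GPUs available to other jobs.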
This cluster has been funded by the CPER LCHN project (Langues, Connaissances & Humanités Numériques, Contrat de plan État / Région Lorraine 2015-2020), and the LARSEN and MULTISPEECH teams at LORIA / Inria Nancy Grand Est.
As a reminder, since this cluster is part of the "production" queue, specific usage rules apply.
Grid'5000 users survey
We are conducting a survey to help us better understand your needs and make Grid'5000 a better research infrastructure.
We thank you in advance for taking a few minutes to complete it (you can answer in French if you prefer).
The survey is available at:
It will be open until December 13th.
-- Grid'5000 Team 15:00, November 26th 2019 (CET)
Read more news
Since June 2008, Inria has been the main contributor to Grid'5000 funding.
Université Grenoble Alpes, Grenoble INP
Université Rennes 1, Rennes
Institut National Polytechnique de Toulouse / INSA / FERIA / Université Paul Sabatier, Toulouse
Université Bordeaux 1, Bordeaux
Université Lille 1, Lille
École Normale Supérieure, Lyon
Provence Alpes Côte d'Azur
Hauts de France