Grid5000:Home
Grid'5000 is a precursor infrastructure of SLICES-RI, the Scientific Large-Scale Infrastructure for Computing/Communication Experimental Studies.
Grid'5000 is a large-scale and flexible testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing, including Cloud, HPC, Big Data and AI.
Random pick of publications
Five random publications that benefited from Grid'5000 (at least 2772 overall):
- Hee-Soo Choi, Priyansh Trivedi, Mathieu Constant, Karën Fort, Bruno Guillaume. Beyond Model Performance: Can Link Prediction Enrich French Lexical Graphs?. The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING), May 2024, Turin, Italy. hal-04537462
- Reda Khoufache, Anisse Belhadj, Hanene Azzag, Mustapha Lebbah. Distributed MCMC inference for Bayesian Non-Parametric Latent Block Model. Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), May 2024, Taipei, Taiwan. hal-04457575
- Etienne Delort, Laura Riou, Anukriti Srivastava. Environmental Impact of Artificial Intelligence. INRIA; CEA Leti. 2023, pp.1-33. hal-04283245
- Wilmer Garzón-Alfonso. Secure distributed workflows for biomedical data analytics. Distributed, Parallel, and Cluster Computing cs.DC. Ecole nationale supérieure Mines-Télécom Atlantique; Escuela Colombiana de Ingeniería Julio Garavito, 2023. English. NNT : 2023IMTA0351. tel-04224597
- Rahma Hellali, Zaineb Chelly Dagdia, Karine Zeitouni. A Multi-Objective Multi-Agent Interactive Deep Reinforcement Learning Approach for Feature Selection. International Conference on Neural Information Processing, Dec 2024, Auckland, New Zealand. pp.15. hal-04723314
Latest news
Cluster "hydra" is now in the default queue in Lyon
We are pleased to announce that the hydra[1] cluster of Lyon is now available in the default queue.
As a reminder, Hydra is a cluster composed of 4 NVIDIA Grace-Hopper servers[2].
Each node features:
Due to its bleeding-edge hardware, the usual Grid'5000 environments are not supported by default for this cluster.
(Hydra requires system environments featuring a Linux kernel >= 6.6.) The default system on the hydra nodes is based on Debian 11, but **does not provide functional GPUs**. However, users may deploy the ubuntugh2404-arm64-big environment, which is similar to the official NVIDIA image provided for this machine and provides GPU support.
To submit a job on this cluster, the following command may be used:
oarsub -t exotic -p hydra
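Putting the pieces above together, a full session on hydra with GPU support might look like the following. This is a sketch assuming standard Grid'5000 oarsub/kadeploy3 usage; the host count and walltime are illustrative values, not prescribed ones:

```shell
# Reserve one hydra node in deploy mode (the exotic type is required
# for this bleeding-edge hardware), interactively, for two hours
oarsub -t exotic -t deploy -p hydra -l host=1,walltime=2 -I

# From the deploy job shell, install the GPU-enabled environment
# on the reserved node (illustrative kadeploy3 invocation)
kadeploy3 -e ubuntugh2404-arm64-big -f $OAR_NODE_FILE -k
```

Once the deployment finishes, the node runs the ubuntugh2404-arm64-big environment with working GPUs, and can be reached over SSH as root.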
This cluster is funded by INRIA and by the Laboratoire de l'Informatique du Parallélisme, with support from ENS Lyon.
[1] Hydra is the largest of the modern constellations according to Wikipedia: https://en.wikipedia.org/wiki/Hydra_(constellation)
[2] https://developer.nvidia.com/blog/nvidia-grace-hopper-superchip-architecture-in-depth/
-- Grid'5000 Team 16:42, 12 June 2025 (CEST)
Cluster "estats" (Jetson nodes in Toulouse) is now KaVLAN-capable
The network topology of the estats Jetson nodes can now be configured, just like for other clusters.
More info in the Network reconfiguration tutorial.
-- Grid'5000 Team 18:25, 21 May 2025 (CEST)
Cluster "chirop" is now in the default queue of Lille with energy monitoring
Dear users,
We are pleased to announce that the Chirop[1] cluster of Lille is now available in the default queue.
This cluster consists of 5 HPE DL360 Gen10+ nodes with:
Energy monitoring[2] is also available for this cluster[3], provided by newly installed wattmeters (similar to those already available in Lyon).
This cluster was funded by CPER CornelIA.
[1] https://www.grid5000.fr/w/Lille:Hardware#chirop
[2] https://www.grid5000.fr/w/Energy_consumption_monitoring_tutorial
[3] https://www.grid5000.fr/w/Monitoring_Using_Kwollect#Metrics_available_in_Grid.275000
-- Grid'5000 Team 16:25, 05 May 2025 (CEST)
Change of default queue based on platform
Until now, Abaca (production) users had to specify `-q production` when reserving Abaca resources with OAR.
This is no longer necessary: the default queue is now selected automatically based on the platform associated with your default group, as shown at https://api.grid5000.fr/explorer/selector/ and in the message displayed when connecting to a frontend.
For SLICES-FR users, there is no change since the correct queue was already selected by default.
Additionally, the "production" queue has been renamed to "abaca", although "production" will continue to work for the foreseeable future.
Please note one case where this change may affect your workflow:
When an Abaca user reserves a resource from SLICES-FR (a non-production resource), they must explicitly specify that they want to use the SLICES-FR queue, which is called "default", by adding `-q default` to the OAR command.
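For example, an Abaca user targeting a SLICES-FR node could submit as follows. This is a sketch of standard OAR usage; the host count, walltime and command are illustrative values:

```shell
# Explicitly select the SLICES-FR "default" queue from an Abaca account
oarsub -q default -l host=1,walltime=1 "hostname"
```

Without `-q default`, the same submission would go to the user's automatically selected "abaca" queue.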
-- Abaca Grid'5000 Team 10:10, 31 March 2025 (CEST)
Grid'5000 sites
Current funding
- INRIA
- CNRS
- Universities: IMT Atlantique
- Regional councils: Aquitaine