Grid5000:Home: Difference between revisions
Content on this website is partly outdated. Technical information remains relevant.
Latest revision as of 09:50, 10 June 2025
Grid'5000 is a precursor infrastructure of SLICES-RI, Scientific Large Scale Infrastructure for Computing/Communication Experimental Studies.
Grid'5000 is a large-scale and flexible testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing, including Cloud, HPC, Big Data and AI. Key features:
Older documents:
Random pick of publications
Five randomly selected publications that benefited from Grid'5000 (out of at least 2772 overall):
- Louis-Claude Canon, Damien Landré, Laurent Philippe, Jean-Marc Pierson, Paul Renaud-Goud. Assessing Power Needs to Run a Workload with Quality of Service on Green Datacenters. 29th International European Conference on Parallel and Distributed Computing (EURO-PAR 2023), Aug 2023, Limassol, Cyprus. pp. 229-242, 10.1007/978-3-031-39698-4_16. hal-04257315
- Dorian Goepp, Fernando Ayats Llamas, Olivier Richard, Quentin Guilloteau. ACM REP24 Tutorial: Reproducible distributed environments with NixOS Compose. 2024, pp. 1-3. hal-04613983
- Sewade Ogun, Abraham T. Owodunni, Tobi Olatunji, Eniola Alese, Babatunde Oladimeji, et al. 1000 African Voices: Advancing inclusive multi-speaker multi-accent speech synthesis. Interspeech 2024, Sep 2024, Kos Island, Greece. hal-04663033
- Romain Xu-Darme, Georges Quénot, Zakaria Chihani, Marie-Christine Rousset. Sanity checks for patch visualisation in prototype-based image classification. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Jun 2023, Vancouver, Canada. cea-04253851v3
- Mateusz Gienieczko, Filip Murlak, Charles Paperman. Supporting Descendants in SIMD-Accelerated JSONPath. International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS 2024), 2024, San Diego (California), United States. pp. 338-361, 10.4230/LIPIcs. hal-04398350
Latest news
Cluster "hydra" is now in the default queue in Lyon
We are pleased to announce that the hydra[1] cluster of Lyon is now available in the default queue.
As a reminder, Hydra is a cluster composed of 4 NVIDIA Grace-Hopper servers[2].
Each node features:
Due to its bleeding-edge hardware, the usual Grid'5000 environments are not supported by default on this cluster: Hydra requires system environments with a Linux kernel >= 6.6. The default system on the hydra nodes is based on Debian 11 but **does not provide functional GPUs**. However, users may deploy the ubuntugh2404-arm64-big environment, which is similar to the official NVIDIA image provided for this machine and does provide GPU support.
To submit a job on this cluster, the following command may be used:
oarsub -t exotic -p hydra
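Combined with a deployment of the GPU-capable environment, a full session might look like the following sketch. This is an illustrative assumption, not a prescribed workflow: the interactive flag, walltime, and exact Kadeploy options may differ depending on your site and tool versions.

```shell
# Reserve one hydra node in deploy mode (the exotic type is required
# for this hardware); -I and the 2-hour walltime are illustrative choices.
oarsub -t exotic -t deploy -p hydra -l host=1,walltime=2:00:00 -I

# From inside the job, install the GPU-capable environment on the
# reserved node ($OAR_NODE_FILE lists the reserved hosts).
kadeploy3 -e ubuntugh2404-arm64-big -f $OAR_NODE_FILE -k
```

These commands only make sense on a Grid'5000 frontend in Lyon; they are shown here as a sketch of the intended sequence.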
This cluster is funded by INRIA and by Laboratoire de l'Informatique du Parallélisme with ENS Lyon support.
[1] Hydra is the largest of the modern constellations according to Wikipedia: https://en.wikipedia.org/wiki/Hydra_(constellation)
[2] https://developer.nvidia.com/blog/nvidia-grace-hopper-superchip-architecture-in-depth/
-- Grid'5000 Team 16:42, 12 June 2025 (CEST)
Cluster "estats" (Jetson nodes in Toulouse) is now KaVLAN-capable
The network topology of the estats Jetson nodes can now be configured, just like for other clusters.
More info in the Network reconfiguration tutorial.
-- Grid'5000 Team 18:25, 21 May 2025 (CEST)
Cluster "chirop" is now in the default queue of Lille with energy monitoring.
Dear users,
We are pleased to announce that the Chirop[1] cluster of Lille is now available in the default queue.
This cluster consists of 5 HPE DL360 Gen10+ nodes with:
Energy monitoring[2] is also available for this cluster[3], provided by newly installed wattmeters (similar to those already available in Lyon).
This cluster was funded by CPER CornelIA.
[1] https://www.grid5000.fr/w/Lille:Hardware#chirop
[2] https://www.grid5000.fr/w/Energy_consumption_monitoring_tutorial
[3] https://www.grid5000.fr/w/Monitoring_Using_Kwollect#Metrics_available_in_Grid.275000
-- Grid'5000 Team 16:25, 05 May 2025 (CEST)
Change of default queue based on platform
Until now, Abaca (production) users had to specify `-q production` when reserving Abaca resources with OAR.
This is no longer necessary: your default queue is now automatically selected based on the platform your default group is associated with, as shown at https://api.grid5000.fr/explorer/selector/ and in the message displayed when connecting to a frontend.
For SLICES-FR users, there is no change since the correct queue was already selected by default.
Additionally, the "production" queue has been renamed to "abaca", although "production" will continue to work for the foreseeable future.
Please note one case where this change may affect your workflow:
When an Abaca user reserves a resource from SLICES-FR (a non-production resource), they must explicitly select the SLICES-FR queue, which is called "default", by adding `-q default` to the OAR command.
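Concretely, the difference can be sketched with two reservations made from an Abaca account. The resource shapes and walltimes below are illustrative assumptions, not required values:

```shell
# Abaca resources: no queue flag needed, since the "abaca" queue
# is now selected automatically for Abaca users.
oarsub -l host=1,walltime=1:00:00 -I

# SLICES-FR (non-production) resources: the queue must be named explicitly.
oarsub -q default -l host=1,walltime=1:00:00 -I
```

These commands are command fragments meant to run on a Grid'5000/Abaca frontend.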
-- Abaca Grid'5000 Team 10:10, 31 March 2025 (CEST)
Grid'5000 sites
Current funding
INRIA
CNRS
Universities: IMT Atlantique
Regional councils: Aquitaine