Grid5000:Home: Difference between revisions
__NOTOC__ __NOEDITSECTION__
{|width="95%"
|- valign="top"
|bgcolor="#888888" style="border:1px solid #cccccc;padding:2em;padding-top:1em;"|
[[File:Slices-ri-white-color.png|260px|left]]
<b>Grid'5000 is a precursor infrastructure of [http://www.slices-ri.eu SLICES-RI], the Scientific Large Scale Infrastructure for Computing/Communication Experimental Studies.</b>
<br/>
Some content on this website is outdated, but the technical information remains relevant.
|}
{|width="95%"
|- valign="top"
|bgcolor="#f5fff5" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
[[Image:g5k-backbone.png|thumbnail|260px|right|Grid'5000]]
'''Grid'5000 is a large-scale and flexible testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing, including Cloud, HPC, Big Data and AI.'''

Key features:
<br>
Read more about our [[Team|teams]], our [[Publications|publications]], and the [[Grid5000:UsagePolicy|usage policy]] of the testbed. Then [[Grid5000:Get_an_account|get an account]], and learn how to use the testbed with our [[Getting_Started|Getting Started tutorial]] and the rest of our [[:Category:Portal:User|Users portal]].
<br>
Published documents and presentations:
* [[Media:Grid5000.pdf|Presentation of Grid'5000]] (April 2019)
* [https://www.grid5000.fr/mediawiki/images/Grid5000_science-advisory-board_report_2018.pdf Report from the Grid'5000 Science Advisory Board (2018)]
* [[Lille:Home|Lille]]
* [[Luxembourg:Home|Luxembourg]]
* [[Louvain:Home|Louvain]]
|width="33%" bgcolor="#f5f5f5" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
* [[Lyon:Home|Lyon]]
* [[Nancy:Home|Nancy]]
* [[Nantes:Home|Nantes]]
* [[Rennes:Home|Rennes]]
|width="33%" bgcolor="#f5f5f5" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
* [[Sophia:Home|Sophia-Antipolis]]
* [[Strasbourg:Home|Strasbourg]]
* [[Toulouse:Home|Toulouse]]
|-
== Current funding ==
{|width="100%" cellspacing="3"
|-
Latest revision as of 00:37, 7 June 2025
Random pick of publications
Five random publications that benefited from Grid'5000 (out of at least 2758 overall):
- Diego Amaya-Ramirez. Data science approach for the exploration of HLA antigenicity based on 3D structures and molecular dynamics. Bioinformatics [q-bio.QM]. Université de Lorraine, 2024. NNT: 2024LORR0071. tel-04708399
- Reda Khoufache, Anisse Belhadj, Hanene Azzag, Mustapha Lebbah. Distributed MCMC inference for Bayesian Non-Parametric Latent Block Model. Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), May 2024, Taipei, Taiwan. hal-04457575
- Céline Acary-Robert, Emmanuel Agullo, Ludovic Courtès, Marek Felšöci, Konrad Hinsen, et al. Guix-HPC Activity Report 2022–2023. Inria Bordeaux - Sud-Ouest, 2024, pp. 1-32. hal-04500140
- Igor Fontana de Nardin, Patricia Stolf, Stéphane Caux. BEASY: Making EASY backfilling renewable-only. 35th IEEE International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD 2023), IEEE, Oct 2023, Porto Alegre, Brazil. pp. 57-67. doi:10.1109/SBAC-PAD59825.2023.00015. hal-04206083
- Gaël Vila, Emmanuel Medernach, Inés Gonzalez, Axel Bonnet, Yohan Chatelain, et al. The Impact of Hardware Variability on Applications Packaged with Docker and Guix: a Case Study in Neuroimaging. ACM REP'24, ACM, Jun 2024, Rennes, France. pp. 75-84. doi:10.1145/3641525.3663626. hal-04480308v2
Latest news
Cluster "estats" (Jetson nodes in Toulouse) is now kavlan-capable
The network topology of the estats Jetson nodes can now be configured, just like for other clusters.
More info in the Network reconfiguration tutorial.
-- Grid'5000 Team 18:25, 21 May 2025 (CEST)
Cluster "chirop" is now in the default queue of Lille with energy monitoring.
Dear users,
We are pleased to announce that the Chirop[1] cluster of Lille is now available in the default queue.
This cluster consists of 5 HPE DL360 Gen10+ nodes with:
Energy monitoring[2] is also available for this cluster[3], provided by newly installed Wattmetres (similar to those already available at Lyon).
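As an illustrative sketch (the exact endpoint shape, site path and metric name are assumptions here; the Kwollect tutorial linked in the footnotes is the authoritative reference), wattmetre measurements for a node can typically be fetched over the metrics API:

```shell
# Hypothetical example: querying wattmetre power measurements for one
# chirop node via the Kwollect metrics API. The URL layout and the
# metric name "wattmetre_power_watt" are assumptions to be checked
# against the monitoring tutorial.
curl 'https://api.grid5000.fr/stable/sites/lille/metrics?nodes=chirop-1&metrics=wattmetre_power_watt&start_time=2025-05-05T10:00'
```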
This cluster was funded by CPER CornelIA.
[1] https://www.grid5000.fr/w/Lille:Hardware#chirop
[2] https://www.grid5000.fr/w/Energy_consumption_monitoring_tutorial
[3] https://www.grid5000.fr/w/Monitoring_Using_Kwollect#Metrics_available_in_Grid.275000
-- Grid'5000 Team 16:25, 05 May 2025 (CEST)
Change of default queue based on platform
Until now, Abaca (production) users had to specify `-q production` when reserving Abaca resources with OAR.
This is no longer necessary: your default queue is now automatically selected based on the platform your default group is associated with, as shown at https://api.grid5000.fr/explorer/selector/ and in the message displayed when connecting to a frontend.
For SLICES-FR users, there is no change since the correct queue was already selected by default.
Additionally, the "production" queue has been renamed to "abaca", although "production" will continue to work for the foreseeable future.
Please note one case where this change may affect your workflow:
When an Abaca user reserves a resource from SLICES-FR (a non-production resource), they must explicitly specify that they want to use the SLICES-FR queue, which is called "default", by adding `-q default` to the OAR command.
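The change can be illustrated with two hedged `oarsub` invocations (standard OAR flags; resource and walltime values are arbitrary examples):

```shell
# Abaca users: no explicit queue is needed any more; the "abaca" queue
# (formerly "production") is selected automatically from your default group.
oarsub -I -l host=1,walltime=1:00:00

# Abaca users reserving a SLICES-FR (non-production) resource must now
# request the "default" queue explicitly:
oarsub -q default -I -l host=1,walltime=1:00:00
```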
-- Abaca Grid'5000 Team 10:10, 31 March 2025 (CEST)
Cluster "musa" with Nvidia H100 GPUs is available in production queue
We are pleased to announce that a new cluster named "musa" is available in the production queue¹ of Abaca.
This cluster has been funded by Inria DSI as a shared computing resource.
It is accessible to all Abaca users. Users affiliated with Inria have access with the same level of priority, regardless of the research center to which they are attached.
This cluster is composed of six HPE ProLiant DL385 Gen11 nodes², each with 2 AMD EPYC 9254 24-core processors, 512 GiB of RAM, 2 × Nvidia H100 NVL GPUs (94 GiB) with NVLink, one 6 TB NVMe SSD, and a 25 Gbps Ethernet connection.
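As a sketch (standard OAR syntax; the property filter and resource names are assumptions to adapt to your needs), a single H100 GPU on this cluster could be reserved interactively like so:

```shell
# Hypothetical example: interactive reservation of one GPU on a musa node,
# using the production queue named in this announcement. Adjust the
# walltime to the cluster's limits.
oarsub -q production -p "cluster='musa'" -l gpu=1,walltime=2:00:00 -I
```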
Please note that in order to share it efficiently, walltime is limited:
The cluster "musa" is located at Sophia, hosted in the datacenter of Inria Centre at Université Côte d’Azur.
¹: https://api.grid5000.fr/explorer/hardware/sophia/#musa
²: the nodes are named musa-1, musa-2, …, musa-6
-- Grid'5000 Team 13:30, 19 March 2025 (CEST)
Current funding
INRIA
CNRS
Universities: IMT Atlantique
Regional councils: Aquitaine