Grid5000:Home
|- valign="top"
|bgcolor="#f5fff5" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
[[Image:g5k-backbone.png|thumbnail|260px|right|Grid'5000]]
'''Grid'5000 is a large-scale and flexible testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing, including Cloud, HPC, Big Data and AI.'''

Key features:
* provides '''access to a large amount of resources''': 15000 cores, 800 compute nodes grouped in homogeneous clusters, featuring various technologies: PMEM, GPU, SSD, NVMe, 10G and 25G Ethernet, Infiniband, Omni-Path
* '''highly reconfigurable and controllable''': researchers can experiment with a fully customized software stack thanks to bare-metal deployment features, and can isolate their experiment at the networking layer
* '''advanced monitoring and measurement features for trace collection of networking and power consumption''', providing a deep understanding of experiments
* '''designed to support Open Science and reproducible research''', with full traceability of infrastructure and software changes on the testbed
* '''a vibrant community''' of 500+ users supported by a solid technical team
<br>
Read more about our [[Team|teams]], our [[Publications|publications]], and the [[Grid5000:UsagePolicy|usage policy]] of the testbed. Then [[Grid5000:Get_an_account|get an account]], and learn how to use the testbed with our [[Getting_Started|Getting Started tutorial]] and the rest of our [[:Category:Portal:User|Users portal]].
|
<b>Grid'5000 is merging with [https://fit-equipex.fr FIT] to build the [http://www.silecs.net/ SILECS Infrastructure for Large-scale Experimental Computer Science]. Read [http://www.silecs.net/wp-content/uploads/2018/04/Desprez-SILECS.pdf an Introduction to SILECS] (April 2018)</b>
<br>
Recently published documents and presentations:
* [[Media:Grid5000.pdf|Presentation of Grid'5000]] (April 2019)
* [https://www.grid5000.fr/mediawiki/images/Grid5000_science-advisory-board_report_2018.pdf Report from the Grid'5000 Science Advisory Board (2018)]
Older documents:
* [https://www.grid5000.fr/slides/2014-09-24-Cluster2014-KeynoteFD-v2.pdf Slides from Frederic Desprez's keynote at IEEE CLUSTER 2014]
* [https://www.grid5000.fr/ScientificCommittee/SAB%20report%20final%20short.pdf Report from the Grid'5000 Science Advisory Board (2014)]
<br>
Grid'5000 is supported by a scientific interest group (GIS) hosted by Inria and including CNRS, RENATER and several universities as well as other organizations. Inria has been supporting Grid'5000 through ADT ALADDIN-G5K (2007-2013), ADT LAPLACE (2014-2016), and IPL [[Hemera|HEMERA]] (2010-2014).
|}
<br>
{{#status:0|0|0|http://bugzilla.grid5000.fr/status/upcoming.json}}
<br>
== Random pick of publications ==
{{#publications:}}
== Latest news ==
<rss max=4 item-max-length="2000">https://www.grid5000.fr/rss/G5KNews.php</rss>
----
[[News|Read more news]]
=== Grid'5000 sites ===
{|width="100%" cellspacing="3"
|- valign="top"
|width="33%" bgcolor="#f5f5f5" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
* [[Grenoble:Home|Grenoble]]
* [[Lille:Home|Lille]]
* [[Lyon:Home|Lyon]]
* [[Nancy:Home|Nancy]]
* [[Nantes:Home|Nantes]]
|width="33%" bgcolor="#f5f5f5" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
* [[Rennes:Home|Rennes]]
|-
|}
== Current funding ==
Since June 2008, Inria has been the main contributor to [[Grid5000:Funding|Grid'5000 funding]].
{|width="100%" cellspacing="3"
|-
| width="50%" bgcolor="#f5f5f5" valign="top" align="center" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
===CNRS===
[[Image:CNRS-filaire-Quadri.png|125px]]
|-
| width="50%" bgcolor="#f5f5f5" valign="top" align="center" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
===Universities===
IMT Atlantique<br/>
Université Grenoble Alpes, Grenoble INP<br/>
Université Rennes 1, Rennes<br/>
Institut National Polytechnique de Toulouse / INSA / FERIA / Université Paul Sabatier, Toulouse<br/>
Université Bordeaux 1, Bordeaux<br/>
Université Lille 1, Lille<br/>
École Normale Supérieure, Lyon<br/>
| width="50%" bgcolor="#f5f5f5" valign="top" align="center" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
===Regional councils===
Aquitaine<br/>
Auvergne-Rhône-Alpes<br/>
Bretagne<br/>
Champagne-Ardenne<br/>
Provence Alpes Côte d'Azur<br/>
Hauts de France<br/>
Lorraine<br/>
|}
Latest revision as of 10:29, 26 October 2023
Random pick of publications
Five random publications that benefited from Grid'5000 (at least 2758 overall):
- Cherif Latreche, Nikos Parlavantzas, Hector A. Duran-Limon. FoRLess: A Deep Reinforcement Learning-based approach for FaaS Placement in Fog. UCC 2024 - 17th IEEE/ACM International Conference on Utility and Cloud Computing, Dec 2024, Sharjah, United Arab Emirates. pp. 1-9. hal-04791252
- Kouds Halitim. Enhancing Efficiency through Control theory in Compute-Intensive Applications. Computer Science [cs]. 2023. hal-04357812
- Maxime Agusti, Eddy Caron, Benjamin Fichel, Laurent Lefèvre, Olivier Nicol, et al. PowerHeat: A non-intrusive approach for estimating the power consumption of bare metal water-cooled servers. 2024 IEEE International Conferences on Internet of Things (iThings) and IEEE Green Computing & Communications (GreenCom) and IEEE Cyber, Physical & Social Computing (CPSCom) and IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics, Aug 2024, Copenhagen, Denmark. pp. 1-7. hal-04662683
- Wedan Emmanuel Gnibga, Anne Blavette, Anne-Cécile Orgerie. Latency, Energy and Carbon Aware Collaborative Resource Allocation with Consolidation and QoS Degradation Strategies in Edge Computing. ICPADS 2023 - IEEE International Conference on Parallel and Distributed Systems, Dec 2023, Hainan, China. pp. 1-10, 10.1109/ICPADS60453.2023.00349. hal-04275783
- Abdelghani Alidra, Hugo Bruneliere, Hélène Coullon, Thomas Ledoux, Charles Prud'Homme, et al. SeMaFoR - Self-Management of Fog Resources with Collaborative Decentralized Controllers. SEAMS 2023: IEEE/ACM 18th Symposium on Software Engineering for Adaptive and Self-Managing Systems, May 2023, Melbourne, Australia. pp. 25-31, 10.1109/SEAMS59076.2023.00014. hal-04043471
Latest news
Cluster chirop is now in the default queue of Lille with energy monitoring.
Dear users,
We are pleased to announce that the Chirop[1] cluster of Lille is now available in the default queue.
This cluster consists of 5 HPE DL360 Gen10+ nodes with:
Energy monitoring[2] is also available for this cluster[3], provided by newly installed wattmeters (similar to those already available in Lyon).
This cluster was funded by CPER CornelIA.
[1] https://www.grid5000.fr/w/Lille:Hardware#chirop
[2] https://www.grid5000.fr/w/Energy_consumption_monitoring_tutorial
[3] https://www.grid5000.fr/w/Monitoring_Using_Kwollect#Metrics_available_in_Grid.275000
-- Grid'5000 Team 16:25, 05 May 2025 (CEST)
Change of default queue based on platform
Until now, Abaca (production) users had to specify `-q production` when reserving Abaca resources with OAR.
This is no longer necessary: your default queue is now automatically selected based on the platform your default group is associated with, as shown at https://api.grid5000.fr/explorer/selector/ and in the message displayed when connecting to a frontend.
For SLICES-FR users, there is no change since the correct queue was already selected by default.
Additionally, the "production" queue has been renamed to "abaca", although "production" will continue to work for the foreseeable future.
Please note one case where this change may affect your workflow: when an Abaca user reserves a resource from SLICES-FR (a non-production resource), they must explicitly specify the SLICES-FR queue, which is called "default", by adding `-q default` to the OAR command.
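The queue selection described above can be sketched from a frontend as follows (a minimal illustration of the OAR commands involved; interactive mode with `-I` is used here just as an example):

```shell
# Abaca users: the correct queue is now selected automatically,
# so a plain interactive reservation needs no -q flag:
oarsub -I

# Reserving a SLICES-FR (non-production) resource from an Abaca
# account now requires naming the "default" queue explicitly:
oarsub -q default -I

# The renamed queue can still be requested under either name:
oarsub -q abaca -I       # new name
oarsub -q production -I  # legacy name, still accepted for now
```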
-- Abaca Grid'5000 Team 10:10, 31 March 2025 (CEST)
Cluster "musa" with Nvidia H100 GPUs is available in production queue
We are pleased to announce that a new cluster named "musa" is available in the production queue¹ of Abaca.
This cluster has been funded by Inria DSI as a shared computing resource.
It is accessible to all Abaca users. Users affiliated with Inria have access with the same level of priority, regardless of the research center to which they are attached.
This cluster is composed of six HPE ProLiant DL385 Gen11 nodes², each with 2 AMD EPYC 9254 24-core processors, 512 GiB of RAM, 2 x Nvidia H100 NVL GPUs (94 GiB) with NVLink, one 6 TB NVMe SSD, and a 25 Gbps Ethernet connection.
Please note that in order to share it efficiently, walltime is limited:
The cluster "musa" is located at Sophia, hosted in the datacenter of Inria Centre at Université Côte d’Azur.
¹: https://api.grid5000.fr/explorer/hardware/sophia/#musa
²: the nodes are named musa-1, musa-2, ..., musa-6
-- Grid'5000 Team 13:30, 19 March 2025 (CEST)
Cluster "Hydra" is now in the testing queue in Lyon
We are pleased to announce that the hydra[1] cluster of Lyon is now available in the testing queue.
Hydra is a cluster composed of 4 NVIDIA Grace-Hopper servers[2].
Each node features:
Due to its bleeding-edge hardware, the usual Grid'5000 environments are not supported by default on this cluster (Hydra requires system environments featuring a Linux kernel >= 6.6). The default system on the hydra nodes is based on Debian 11, but **does not provide functional GPU support**. However, users may deploy the ubuntugh2404-arm64-big environment, which is similar to the official Nvidia image provided for this machine and provides GPU support.
To submit a job on this cluster, the following command may be used:
oarsub -q testing -t exotic -p hydra
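Building on the command above, a full session that deploys the GPU-enabled environment mentioned earlier could look like the following sketch (standard Grid'5000 tools; exact kadeploy3 options may vary with the installed version):

```shell
# Reserve a hydra node in deploy mode from the Lyon frontend:
oarsub -q testing -t exotic -t deploy -p hydra -I

# From within the job, deploy the GPU-enabled environment on the
# reserved node(s), copying an SSH key for root access:
kadeploy3 -e ubuntugh2404-arm64-big -f $OAR_NODE_FILE -k

# Connect to the freshly deployed node and check the GPU:
ssh root@"$(head -1 "$OAR_NODE_FILE")" nvidia-smi
```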
This cluster is funded by Inria and by the Laboratoire de l'Informatique du Parallélisme, with support from ENS Lyon.
[1] Hydra is the largest of the modern constellations according to Wikipedia: https://en.wikipedia.org/wiki/Hydra_(constellation)
[2] https://developer.nvidia.com/blog/nvidia-grace-hopper-superchip-architecture-in-depth/
-- Grid'5000 Team 16:10, 11 March 2025 (CEST)