Grid5000:Home

Grid'5000

Grid'5000 is a large-scale and versatile testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing including Cloud, HPC and Big Data.

Key features:

  • provides access to a large amount of resources: 12,000 cores and 800 compute nodes grouped in homogeneous clusters, featuring various technologies: GPU, SSD, NVMe, 10G and 25G Ethernet, InfiniBand, Omni-Path
  • highly reconfigurable and controllable: researchers can experiment with a fully customized software stack thanks to bare-metal deployment features, and can isolate their experiments at the networking layer (a minimal deployment sketch follows this list)
  • advanced monitoring and measurement features for collecting traces of network traffic and power consumption, providing a deep understanding of experiments
  • designed to support Open Science and reproducible research, with full traceability of infrastructure and software changes on the testbed
  • a vibrant community of 500+ users supported by a solid technical team
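
To give a concrete idea of the reconfiguration workflow mentioned in the list above, here is a minimal sketch of reserving a node in deployment mode with OAR and installing a reference system image with Kadeploy. The environment name (debian10-x64-base) and the walltime are placeholder examples; check the Getting Started tutorial for the commands and environments currently available.

  # From a site frontend: reserve one node in deployment mode for 2 hours (interactive job)
  oarsub -I -t deploy -l nodes=1,walltime=2

  # Install a reference environment on the reserved node;
  # -k copies your SSH public key so you can later log in as root
  kadeploy3 -e debian10-x64-base -f $OAR_NODE_FILE -k

  # Connect to the freshly deployed node
  ssh root@$(head -1 $OAR_NODE_FILE)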


Read more about our teams, our publications, and the usage policy of the testbed. Then get an account, and learn how to use the testbed with our Getting Started tutorial and the rest of our Users portal.



Grid'5000 is supported by a scientific interest group (GIS) hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations. Inria has been supporting Grid'5000 through ADT ALADDIN-G5K (2007-2013), ADT LAPLACE (2014-2016), and IPL HEMERA (2010-2014).


Current status (at 2020-02-19 12:19): no current events, none planned (details)


Random pick of publications

Five random publications that benefited from Grid'5000 (at least 2139 overall):

  • Nouredine Melab, Jan Gmys, Mohand Mezmaz, Daniel Tuyttens. Many-core Branch-and-Bound for GPU accelerators and MIC coprocessors. In T. Bartz-Beielstein, B. Filipic, P. Korosec, E-G. Talbi (eds.), High-Performance Simulation-Based Optimization, Studies in Computational Intelligence 833, Springer, 2019. ISBN 978-3-030-18763-7. hal-01924766
  • Lucas Nussbaum. An overview of Fed4FIRE testbeds -- and beyond?. GEFI - Global Experimentation for Future Internet Workshop, Nov 2019, Coimbra, Portugal. hal-02401738
  • Rafael Ferreira da Silva, Anne-Cécile Orgerie, Henri Casanova, Ryan Tanaka, Ewa Deelman, et al. Accurately Simulating Energy Consumption of I/O-intensive Scientific Workflows. ICCS 2019 - International Conference on Computational Science, Jun 2019, Faro, Portugal. pp. 138-152. doi:10.1007/978-3-030-22734-0_11. hal-02112893
  • Nathalie Bertrand, Igor Konnov, Marijana Lazic, Josef Widder. Verification of Randomized Distributed Algorithms under Round-Rigid Adversaries. 2019. hal-01925533v3
  • Jasmin Blanchette, Daniel Ouraoui, Pascal Fontaine, Cezary Kaliszyk. Machine Learning for Instance Selection in SMT Solving. AITP 2019 - Conference on Artificial Intelligence and Theorem Proving, Apr 2019, Obergurgl, Austria. hal-02381430


Latest news

New cluster "troll" available in Grenoble

We are pleased to announce that a new cluster called "troll" is available in Grenoble¹.

It features 4 Dell R640 nodes, each with 2 Intel® Xeon® Gold 5218 CPUs (16 cores/CPU), 384 GB DDR4, 1.5 TB of PMEM (Intel® Optane™ DC Persistent Memory)²³, a 1.6 TB NVMe SSD, 10Gbps Ethernet, and 100Gb Omni-Path.

Energy monitoring⁴ is available for this cluster, provided by the same devices used for the other clusters in Grenoble.

This cluster has been funded by the PERM@RAM project from Laboratoire d'Informatique de Grenoble (CNRS/INS2I grant).


¹: https://www.grid5000.fr/w/Grenoble:Hardware

²: https://software.intel.com/en-us/articles/quick-start-guide-configure-intel-optane-dc-persistent-memory-on-linux

³: https://docs.pmem.io/persistent-memory/

⁴: https://www.grid5000.fr/w/Energy_consumption_monitoring_tutorial

-- Grid'5000 Team 17:00, February 3rd 2020 (CET)


New cluster available in Nancy: grue (20 GPUs)

We are pleased to announce that the grue cluster in Nancy¹ (production queue) is now available.

It features 5 Dell R7425 server nodes, each with four Tesla T4 GPUs², 128 GB DDR4, one 480 GB SSD, and 2 AMD EPYC 7351 CPUs (16 cores/CPU).

As this cluster features 4 GPUs per node, we remind you that you can monitor GPU (and node) usage using the Ganglia tool (std environment only).

If your experiments do not require all the GPUs of a single node, it is possible to reserve GPUs³ at the resource level (see https://grid5000.fr/w/News#Enabling_GPU_level_resource_reservation_in_OAR for some examples; a minimal sketch also follows below). You can also use the nvidia-smi and htop commands on your reserved nodes to get more information about your GPU/CPU usage.
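
As a minimal sketch of such a GPU-level reservation (the authoritative syntax is documented on the News page linked above), reserving a single GPU on this cluster could look like the following; the walltime is an arbitrary example value:

  # Interactive job on a single GPU of the grue cluster (production queue)
  oarsub -q production -p "cluster='grue'" -l gpu=1,walltime=1 -I

  # Once connected to the node, check the allocated GPU and its utilization
  nvidia-smi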

This cluster has been funded by the CPER LCHN project (Langues, Connaissances & Humanités Numériques, Contrat de plan État / Région Lorraine 2015-2020), and by the LARSEN and MULTISPEECH teams at LORIA / Inria Nancy Grand Est.

As a reminder, since this cluster is part of the "production" queue, specific usage rules apply.


¹: https://www.grid5000.fr/w/Hardware

²: https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/tesla-t4/t4-tensor-core-d...

Grid'5000 users survey

We are conducting a survey to help us better understand your needs and make Grid'5000 a better research infrastructure.

We thank you in advance for taking a few minutes to complete it (you can answer in French if you prefer).

The survey is available at:

https://sondages.inria.fr/index.php/672895

It will be open until December 13th.

-- Grid'5000 Team 15:00, November 26th 2019 (CET)

New cluster "gemini" available in Lyon

We are pleased to announce the availability of the new cluster "gemini" in Lyon.

Gemini consists of two Nvidia DGX-1 nodes, each with 8 Nvidia V100 GPUs, 2 Intel Xeon E5-2698 v4 @ 2.20GHz CPUs, 512 GB DDR4, InfiniBand EDR and 10Gbps Ethernet interfaces, and 4 reservable¹ SSD disks.

Energy monitoring is also available for this cluster, provided by the same devices used for the other clusters in Lyon².

Remember that if you don't need all 8 GPUs, individual GPUs may be reserved³. A script to install nvidia-docker is also available if you want to use Nvidia's images built for Docker⁴.

This cluster has been funded by the CPER LECO++ Project (FEDER, Région Auvergne-Rhone-Alpes, DRRT, Inria).

¹: https://www.grid5000.fr/w/Disk_reservation

²: https://www.grid5000.fr/w/Energy_consumption_monitoring_tutorial

³: https://www.grid5000.fr/w/Accelerators_on_Grid5000#Reserving_GPU_units_on_nodes_with_many_GPUs

⁴: https://www.grid5000.fr/w/Docker#Nvidia-docker

-- Grid'5000 Team 15:00, November 12th 2019 (CET)


Read more news

Grid'5000 sites

Current funding

Since June 2008, Inria has been the main contributor to Grid'5000 funding.

INRIA


CNRS


Universities

Université Grenoble Alpes, Grenoble INP
Université Rennes 1, Rennes
Institut National Polytechnique de Toulouse / INSA / FERIA / Université Paul Sabatier, Toulouse
Université Bordeaux 1, Bordeaux
Université Lille 1, Lille
École Normale Supérieure, Lyon

Regional councils

Aquitaine
Auvergne-Rhône-Alpes
Bretagne
Champagne-Ardenne
Provence Alpes Côte d'Azur
Hauts de France
Lorraine