Grid5000:Home

Revision as of 14:00, 29 May 2017 by Pneyron (talk | contribs)

Grid'5000

Grid'5000 is a large-scale and versatile testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing including Cloud, HPC and Big Data.

Key features:

  • provides access to a large amount of resources: 1,000 nodes and 8,000 cores, grouped in homogeneous clusters and featuring various technologies: 10 Gbps Ethernet, InfiniBand, GPUs, Xeon Phi
  • highly reconfigurable and controllable: researchers can experiment with a fully customized software stack thanks to bare-metal deployment features, and can isolate their experiment at the networking layer
  • advanced monitoring and measurement features for collecting traces of network activity and power consumption, providing a deep understanding of experiments
  • designed to support Open Science and reproducible research, with full traceability of infrastructure and software changes on the testbed
  • a vibrant community of 500+ users supported by a solid technical team


Read more about our teams, our publications, and the usage policy of the testbed. Then get an account, and learn how to use the testbed with our Getting Started tutorial and the rest of our Users portal.



Grid'5000 is supported by a scientific interest group (GIS) hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations. Inria has been supporting Grid'5000 through ADT ALADDIN-G5K (2007-2013), ADT LAPLACE (2014-2016), and IPL HEMERA (2010-2014).


Current status (at 2019-01-22 23:49): no current events, none planned

Latest news

Grid'5000 is now part of the Fed4FIRE testbed federation

In the context of the Fed4FIRE+ H2020 project, Grid'5000 joined the Fed4FIRE federation and is now listed as one of its testbeds.

Work is still ongoing to allow the use of Grid'5000 resources through the Fed4FIRE API and tools.

For more information, see:

-- Grid'5000 Team 10:30, 10 January 2019 (CET)

New clusters available in Grenoble

We are pleased to announce that the two new clusters in Grenoble are now fully operational:

  • dahu: 32x Dell PE C6420, 2 x Intel Xeon Gold 6130 (Skylake, 2.10GHz, 16 cores), 192 GiB RAM, 240+480 GB SSD + 4.0 TB HDD, 10 Gbps Ethernet + 100 Gbps Omni-Path
  • yeti: 4x Dell PE R940, 4 x Intel Xeon Gold 6130 (Skylake, 2.10GHz, 16 cores), 768 GiB RAM, 480 GB SSD + 2x 1.6 TB NVME SSD + 3x 2.0 TB HDD, 10 Gbps Ethernet + 100 Gbps Omni-Path

These nodes share an Omni-Path network with 40 additional "dahu" nodes operated by GRICAD (the HPC mesocentre of Univ. Grenoble Alpes), and are equipped with high-frequency power monitoring devices (the same wattmeters as in Lyon). This equipment was mainly funded by the LECO CPER (FEDER, Région Auvergne-Rhône-Alpes, DRRT, Inria) and the COMUE Univ. Grenoble Alpes.

-- Grid'5000 Team 15:00, 20 December 2018 (CET)

New environments available for testing: CentOS 7, Ubuntu 18.04, Debian testing

Three new environments are now available and registered on all sites with Kadeploy: CentOS 7 (centos7-x64-min), Ubuntu 18.04 (ubuntu1804-x64-min), and Debian testing (debiantesting-x64-min).

They are in a beta state: at this point, we welcome feedback from users about those environments (issues, missing features, etc.).
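For reference, trying one of these environments on a reserved node typically looks like the following session (an illustrative sketch: the exact reservation options depend on the site and on your usual workflow):

  oarsub -I -t deploy                                # reserve a node with an interactive job of type "deploy"
  kadeploy3 -e centos7-x64-min -f $OAR_NODE_FILE -k  # deploy the CentOS 7 environment and copy your SSH key

Once deployment completes, you can log in as root on the deployed node and report any issue you encounter.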

We also welcome feedback on other environments that would be useful for your experiments, and that are currently missing from the set of environments provided on Grid'5000.

Those environments are built with Kameleon <http://kameleon.imag.fr/>. Recipes are available in the 'newrecipes' branch of the environments-recipes git repository:

https://github.com/grid5000/environments-recipes/tree/newrecipes
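To inspect or rebuild a recipe locally, a session along these lines should work (illustrative; it requires git and a local Kameleon installation, and the recipe file name below is an assumption to be checked against the repository):

  git clone https://github.com/grid5000/environments-recipes.git
  cd environments-recipes
  git checkout newrecipes
  kameleon build centos7-x64-min.yaml   # hypothetical recipe file name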

Our testing indicates that these environments work fine on all clusters, except in the following cases:

  • Debian testing on granduc (luxembourg) and sagittaire (lyon): deployment fails due to lack of entropy after boot (related to the fix for CVE-2018-1108)
  • Debian testing on chifflot (lille): deployment fails because the predictable naming of network interfaces changed

-- Grid'5000 Team 15:28, 19 December 2018 (CET)

New clusters available in Lille

We are pleased to announce that two new clusters are available in Lille:

  • chifflot: 8 Dell PE R740 nodes with 2 x Intel Xeon Gold 6126 (12C/24T), 192 GB DDR4, 2 x 447 GB SSD + 4 x 3.639 TB SAS HDD, including:
    • chifflot-[1-6] nodes with 2 Nvidia P100
    • chifflot-[7-8] nodes with 2 Nvidia V100
  • chiclet: 8 Dell PE R7425 nodes with 2 x AMD EPYC 7301 (16C/32T), 128 GB DDR4

These nodes are connected with 25 Gbps Ethernet to a Cisco Nexus 9000 switch (reference: 93180YC-EX).

The extension of the hardware equipment at the Lille site of Grid'5000 is part of the Data (Advanced data science and technologies) CPER project led by Inria, with the support of the regional council of Hauts-de-France, FEDER, and the State.

-- Grid'5000 Team 16:40, 12 November 2018 (CET)


Read more news

Current funding

Since June 2008, Inria has been the main contributor to Grid'5000 funding.

INRIA


CNRS


Universities

Université Grenoble Alpes, Grenoble INP
Université Rennes 1, Rennes
Institut National Polytechnique de Toulouse / INSA / FERIA / Université Paul Sabatier, Toulouse
Université Bordeaux 1, Bordeaux
Université Lille 1, Lille
École Normale Supérieure, Lyon

Regional councils

Aquitaine
Bretagne
Champagne-Ardenne
Provence Alpes Côte d'Azur
Nord Pas de Calais
Lorraine