Grid'5000

Grid'5000 is a large-scale and flexible testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing, including Cloud, HPC, Big Data, and AI.

Key features:

  • provides access to a large amount of resources: 15,000 cores and 800 compute nodes grouped in homogeneous clusters, featuring various technologies: PMEM, GPU, SSD, NVMe, 10G and 25G Ethernet, InfiniBand, Omni-Path
  • highly reconfigurable and controllable: researchers can experiment with a fully customized software stack thanks to bare-metal deployment features, and can isolate their experiment at the networking layer
  • advanced monitoring and measurement features for the collection of networking and power-consumption traces, providing a deep understanding of experiments
  • designed to support Open Science and reproducible research, with full traceability of infrastructure and software changes on the testbed
  • a vibrant community of 500+ users supported by a solid technical team


Read more about our teams, our publications, and the usage policy of the testbed. Then get an account, and learn how to use the testbed with our Getting Started tutorial and the rest of our Users portal.

Grid'5000 is merging with FIT to build the SILECS Infrastructure for Large-scale Experimental Computer Science. Read an Introduction to SILECS (April 2018)


Recently published documents and presentations:

Older documents:


Grid'5000 is supported by a scientific interest group (GIS) hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations. Inria has been supporting Grid'5000 through ADT ALADDIN-G5K (2007-2013), ADT LAPLACE (2014-2016), and IPL HEMERA (2010-2014).


Current status (at 2020-11-25 14:21): 3 current events, 3 planned (details)


Random pick of publications

Five random publications that benefited from Grid'5000 (at least 2057 overall):

  • Alexandru Costan. From Big Data to Fast Data: Efficient Stream Data Management. Distributed, Parallel, and Cluster Computing [cs.DC]. ENS Rennes, 2019. tel-02059437v2
  • Pedro Bruel, Steven Quinito Masnada, Brice Videau, Arnaud Legrand, Jean-Marc Vincent, et al. Autotuning under Tight Budget Constraints: A Transparent Design of Experiments Approach. 2018. hal-01953287
  • Tiago Carneiro, Nouredine Melab. An Incremental Parallel PGAS-based Tree Search Algorithm. HPCS 2019 - International Conference on High Performance Computing & Simulation, Jul 2019, Dublin, Ireland. hal-02170842
  • Jean Luca Bez, Francieli Zanon Boito, Ramon Nou, Alberto Miranda, Toni Cortes, et al. Detecting I/O Access Patterns of HPC Workloads at Runtime. SBAC-PAD 2019 - International Symposium on Computer Architecture and High Performance Computing, Oct 2019, Campo Grande, Brazil. hal-02276191
  • Alejandro Z. Tomsic, Manuel Bravo, Marc Shapiro. Distributed transactional reads: the strong, the quick, the fresh & the impossible. 2018 ACM/IFIP/USENIX International Middleware Conference, Dec 2018, Rennes, France. pp. 14. DOI: 10.1145/3274808.3274818. hal-01876456


Latest news

OAR job container feature re-activated

We are pleased to announce that the OAR job container feature has been re-activated. It makes it possible to execute inner jobs within the boundaries of a container job.

It can be used, for example, by a professor to reserve resources with a container job for a teaching lab, and let students run their own jobs inside that container.

More information on job containers is available here.
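For illustration, here is a minimal sketch of that workflow using standard oarsub options (the walltimes, node counts, job id 1234 and the script my_task.sh are hypothetical placeholders):

  # The professor reserves a container job spanning 10 nodes for 4 hours
  oarsub -t container -l nodes=10,walltime=4:00:00 "sleep infinity"
  # Assume OAR assigns job id 1234 to this container job.

  # A student then submits an inner job that runs within that container's resources
  oarsub -t inner=1234 -l nodes=2,walltime=1:00:00 ./my_task.sh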

Please note that if a single user needs to run multiple tasks (e.g. SPMD) within a bigger resource reservation, it is preferable to use a tool such as GNU Parallel.
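As a hedged illustration (the script ./process.sh and the input files are hypothetical), GNU Parallel can spread independent tasks over the cores of a single reservation:

  # Inside an OAR job, run one task per input file, with one parallel slot per core
  parallel -j $(nproc) ./process.sh {} ::: input/*.dat

  # To spread the tasks over every node of the reservation instead, GNU Parallel's
  # --sshloginfile option can be pointed at the OAR node file:
  # parallel --sshloginfile $OAR_NODE_FILE ./process.sh {} ::: input/*.dat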

-- Grid'5000 Team 15:40, November 19th 2020 (CET)

Grid'5000 global VLANs and stitching now available through the Fed4FIRE federation

We announced in June that Grid'5000 nodes were now available through the Grid'5000 Aggregate Manager to users of the Fed4FIRE testbed Federation.

Fed4FIRE is the largest federation worldwide of Next Generation Internet (NGI) testbeds, which provide open, accessible and reliable facilities supporting a wide variety of different research and innovation communities and initiatives in Europe.

We are now pleased to announce that the Aggregate Manager allows the reservation of Grid'5000 global VLANs and inter-testbed stitching through the federation's jFed-Experiment application, as well as through tools designed to access GENI testbeds.

Inter-testbed stitching allows users to link Grid'5000 global VLANs to external VLANs connected to other testbeds in the federation. Using this technology, users can set up experiments around wide-area Layer 2 networks.

Grid'5000 users wanting to use the Aggregate Manager should link their Fed4FIRE account to their Grid'5000 account using the process described in the wiki.

-- Grid'5000 Team 10:00, November 16th 2020 (CET)

New cluster grappe available in the production queue

We are pleased to announce that a new cluster named "grappe" is available at Nancy¹.

We chose that name in honor of the famous Alsatian wines, as the network equipment of grappe was mistakenly delivered to Strasbourg. We interpreted that amusing event as a kind of "birth origin".

The cluster features 16 Dell R640 nodes, each with 2 Intel® Xeon® Gold 5218R CPUs (20 cores/CPU), 96 GB of RAM, a 480 GB SSD plus an 8.0 TB reservable HDD, and 25 Gbps Ethernet.

Energy monitoring² is available for the cluster.

The cluster has been funded by the CPER IT2MP (Contrat Plan État Région, Innovations Technologiques, Modélisation & Médecine Personnalisée) and FEDER (Fonds européen de développement régional)³.

¹: https://www.grid5000.fr/w/Nancy:Hardware

²: https://www.grid5000.fr/w/Energy_consumption_monitoring_tutorial

³: https://www.loria.fr/fr/la-recherche/axes-scientifiques-transverses/projets-sante-numerique/
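For illustration, a minimal example of reserving a grappe node (the interactive mode, node count and walltime are arbitrary choices; note the production queue mentioned above):

  # Interactive 1-hour reservation of one grappe node in Nancy's production queue
  oarsub -q production -p "cluster='grappe'" -l nodes=1,walltime=1:00:00 -I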

-- Grid'5000 Team 14:30, October 16th 2020 (CET)

ARM64 cluster Pyxis available in the default queue

We are pleased to announce that the "pyxis" cluster in Lyon is now available in the default queue!

It is composed of 4 nodes with ARM64 CPUs (ThunderX2 9980, 2x32 cores per node), 256 GB of RAM, 2 x 250 GB HDDs, 10 Gbps Ethernet, and a 100 Gbps InfiniBand interface. Each node's power consumption is monitored with the Lyon wattmetre.

Pyxis nodes must be reserved using the "exotic" OAR job type (add "-t exotic -p cluster='pyxis'" to your OAR submission, as in the example below).
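For example, a complete submission line combining these options could look as follows (the interactive mode, node count and walltime are illustrative choices):

  # Interactive 2-hour reservation of one pyxis node using the exotic job type
  oarsub -t exotic -p "cluster='pyxis'" -l nodes=1,walltime=2:00:00 -I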

Several arm64 environments are available to be deployed on this cluster: https://www.grid5000.fr/w/Advanced_Kadeploy#Search_and_deploy_an_existing_environment
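As a hedged sketch (the environment name below is a placeholder to be replaced by one of the registered arm64 environments), deployment from a deploy-type job typically looks like:

  # List registered environments and keep the arm64 ones (names change over time)
  kaenv3 -l | grep arm64

  # From a job submitted with both the exotic and deploy types
  # (oarsub -t exotic -t deploy ...), deploy the chosen environment on the reserved nodes
  kadeploy3 -e <arm64-environment> -f $OAR_NODE_FILE -k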

This cluster has been funded by the CPER LECO++ Project (FEDER, Région Auvergne-Rhone-Alpes, DRRT, Inria).

-- Grid'5000 Team 10:50, September 30th 2020 (CEST)


Read more news

Grid'5000 sites

Current funding

Since June 2008, Inria has been the main contributor to Grid'5000 funding.

INRIA

CNRS

Universities

Université Grenoble Alpes, Grenoble INP
Université Rennes 1, Rennes
Institut National Polytechnique de Toulouse / INSA / FERIA / Université Paul Sabatier, Toulouse
Université Bordeaux 1, Bordeaux
Université Lille 1, Lille
École Normale Supérieure, Lyon

Regional councils

Aquitaine
Auvergne-Rhône-Alpes
Bretagne
Champagne-Ardenne
Provence Alpes Côte d'Azur
Hauts de France
Lorraine