Grid5000:Home

Revision as of 13:26, 2 August 2018 by Ddelabroye (talk | contribs)

Grid'5000

Grid'5000 is a large-scale and versatile testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing including Cloud, HPC and Big Data.

Key features:

  • provides access to a large amount of resources: 1000 nodes and 8000 cores, grouped in homogeneous clusters, and featuring various technologies: 10G Ethernet, InfiniBand, GPUs, Xeon Phi
  • highly reconfigurable and controllable: researchers can experiment with a fully customized software stack thanks to bare-metal deployment features, and can isolate their experiment at the networking layer
  • advanced monitoring and measurement features for collecting network and power-consumption traces, providing a deep understanding of experiments
  • designed to support Open Science and reproducible research, with full traceability of infrastructure and software changes on the testbed
  • a vibrant community of 500+ users supported by a solid technical team


Read more about our teams, our publications, and the usage policy of the testbed. Then get an account, and learn how to use the testbed with our Getting Started tutorial and the rest of our Users portal.



Grid'5000 is supported by a scientific interest group (GIS) hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations. Inria has been supporting Grid'5000 through ADT ALADDIN-G5K (2007-2013), ADT LAPLACE (2014-2016), and IPL HEMERA (2010-2014).


Current status (at 2018-12-16 02:46): No current events, 1 planned (details)


Random pick of publications

Five random publications that benefited from Grid'5000 (at least 1729 overall):

  • Bo Zhang, Filip Křikava, Romain Rouvoy, Lionel Seinturier. Self-Balancing Job Parallelism and Throughput in Hadoop. 16th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems (DAIS), Jun 2016, Heraklion, Crete, Greece. Springer, LNCS 9687, pp. 129-143. DOI 10.1007/978-3-319-39577-7_11. hal-01294834
  • David Beniamine. Analyzing the Memory Behavior of Parallel Scientific Applications. PhD thesis, Université Grenoble Alpes, 2016. NNT: 2016GREAM088. tel-01681008v2
  • Orcun Yildiz. Efficient Big Data Processing on Large-Scale Shared Platforms: Managing I/Os and Failure. PhD thesis, École normale supérieure de Rennes, 2017. NNT: 2017ENSR0009. tel-01723850
  • Cédric Tedeschi. Distributed Chemically-Inspired Runtimes for Large Scale Adaptive Computing Platforms. Université de Rennes 1, 2017. tel-01665776
  • Mehdi Zitouni, Reza Akbarinia, Sadok Ben Yahia, Florent Masseglia. Massively Distributed Environments and Closed Itemset Mining: The DCIM Approach. BDA: Conférence sur la Gestion de Données — Principes, Technologies et Applications, Nov 2017, Nancy, France. pp. 1-15. DOI 10.1145/1837934.1837995. lirmm-01620354


Latest news

New clusters available in Lille

We are pleased to announce that two new clusters are available in Lille:

  • chifflot: 8 Dell PE R740 nodes with 2 x Intel Xeon Gold 6126 (12C/24T), 192 GB DDR4, 2 x 447 GB SSD + 4 x 3.639 TB SAS, including:
    • chifflot-[1-6] nodes with 2 Nvidia P100
    • chifflot-[7-8] nodes with 2 Nvidia V100
  • chiclet: 8 Dell PE R7425 nodes with 2 x AMD EPYC 7301 (16C/32T), 128 GB DDR4

These nodes are connected with 25 Gb Ethernet to a Cisco Nexus 9000 switch (reference: 93180YC-EX).

The extension of the hardware equipment at the Lille site of Grid'5000 is part of the Data (Advanced data science and technologies) CPER project led by Inria, with the support of the regional council of Hauts-de-France, FEDER and the French State.

-- Grid'5000 Team 16:40, 12 November 2018 (CET)

100 Gbps Omni-Path network now available

Omni-Path networking is now available on Nancy's grele and grimani clusters.

On the software side, support is provided in Grid'5000 environments using packages from the Scibian distribution[0], a Debian-based distribution for high-performance computing started by EDF.

OpenMPI automatically detects and uses Omni-Path when available. To learn more about how to use it, refer to the Run MPI on Grid'5000 tutorial[1].
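As a minimal sketch of what this looks like in practice (the cluster choice and options are illustrative, `./my_mpi_program` is a placeholder, and the tutorial [1] covers the details):

```shell
# Sketch only: reserve two nodes of the grele cluster interactively with OAR.
oarsub -I -p "cluster='grele'" -l nodes=2,walltime=1:00:00

# From the job shell: launch an MPI program on the reserved nodes.
# OpenMPI selects the Omni-Path (PSM2) transport automatically when present;
# the --mca option below merely requests it explicitly.
mpirun -machinefile $OAR_NODEFILE --mca mtl psm2 ./my_mpi_program
```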

More new clusters with Omni-Path networks are in the final stages of installation. Stay tuned for updates!

[0] http://www.scibian.org

[1] https://www.grid5000.fr/mediawiki/index.php/Run_MPI_On_Grid%275000

-- Grid'5000 Team 15:10, 25 September 2018 (CET)

Change in users' home access

This week, we will change the access policy of home directories in order to improve the security and privacy of data stored in your home directory.

This change should be transparent to most users, as:

  • You will still have access to your local user home from reserved nodes when using the standard environment or when deploying custom environments such as 'nfs' or 'big'.
  • You can still access every user's home from frontends (provided the access permissions allow it).

However, in other situations (giving other users access to your home, mounting your home directory from another site, from inside a virtual machine, or from a VLAN, ...), you will need to explicitly allow access using the new Storage Manager API. See https://www.grid5000.fr/w/Storage_Manager

Note that this change requires a switch to autofs to mount /home in user environments. This should be transparent in most cases because g5k-postinstall (used by all reference environments) has been modified. However, if you use an old environment, it might require a change either to switch to g5k-postinstall, or to switch to autofs. Contact us if needed.

-- Grid'5000 Team 14:40, 4 September 2018 (CET)

New cluster in Nantes: ecotype

We are pleased to announce a new cluster called "ecotype", hosted at the IMT Atlantique campus in Nantes. It features 48 Dell PowerEdge R630 nodes, each with 2 Intel Xeon E5-2630L v4 (10C/20T), 128 GB DDR4, a 372 GB SSD and 10 Gbps Ethernet.

This cluster has been funded by the CPER SeDuCe project (Regional Council of the Pays de la Loire, Nantes Métropole, Inria Rennes - Bretagne Atlantique, IMT Atlantique and the French government).

-- Grid'5000 Team 16:30, 29 August 2018 (CET)


Read more news

Grid'5000 sites

Current funding

Since June 2008, Inria has been the main contributor to Grid'5000 funding.

Inria


CNRS


Universities

Université Grenoble Alpes, Grenoble INP
Université Rennes 1, Rennes
Institut National Polytechnique de Toulouse / INSA / FERIA / Université Paul Sabatier, Toulouse
Université Bordeaux 1, Bordeaux
Université Lille 1, Lille
École Normale Supérieure, Lyon

Regional councils

Aquitaine
Bretagne
Champagne-Ardenne
Provence Alpes Côte d'Azur
Hauts de France
Lorraine