Grid'5000

Grid'5000 is a large-scale and flexible testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing, including Cloud, HPC, Big Data and AI.

Key features:

  • provides access to a large amount of resources: 15,000 cores and 800 compute nodes grouped in homogeneous clusters, featuring various technologies: PMEM, GPU, SSD, NVMe, 10G and 25G Ethernet, InfiniBand, Omni-Path
  • highly reconfigurable and controllable: researchers can experiment with a fully customized software stack thanks to bare-metal deployment features, and can isolate their experiment at the networking layer
  • advanced monitoring and measurement features for collecting network and power-consumption traces, providing a deep understanding of experiments
  • designed to support Open Science and reproducible research, with full traceability of infrastructure and software changes on the testbed
  • a vibrant community of 500+ users supported by a solid technical team


Read more about our teams, our publications, and the usage policy of the testbed. Then get an account, and learn how to use the testbed with our Getting Started tutorial and the rest of our Users portal.

Grid'5000 is merging with FIT to build the SILECS Infrastructure for Large-scale Experimental Computer Science. Read an Introduction to SILECS (April 2018)



Grid'5000 is supported by a scientific interest group (GIS) hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations. Inria has been supporting Grid'5000 through ADT ALADDIN-G5K (2007-2013), ADT LAPLACE (2014-2016), and IPL HEMERA (2010-2014).


Current status (at 2021-01-26 16:09): 1 current event, 1 planned (details)


Random pick of publications

Five random publications that benefited from Grid'5000 (at least 1729 overall):

  • Louis Béziaud, Tristan Allard, David Gross-Amblard. Lightweight Privacy-Preserving Task Assignment in Skill-Aware Crowdsourcing: Full Version. 2017. hal-01534682 view on HAL pdf
  • Brice Nédelec. Decentralized Collaborative Editing in Web Browsers (Édition collaborative décentralisée dans les navigateurs). PhD thesis, Computer Science, Université de Nantes, 2016. In French. tel-01387581 view on HAL pdf
  • Karan Nathwani, Juan Morales-Cordovilla, Sunit Sivasankaran, Irina Illina, Emmanuel Vincent. An extended experimental investigation of DNN uncertainty propagation for noise robust ASR. 5th Joint Workshop on Hands-free Speech Communication and Microphone Arrays (HSCMA 2017), Mar 2017, San Francisco, United States. 2017. hal-01446441 view on HAL pdf
  • Boris Mansencal, Jenny Benois-Pineau, Hervé Bredin, Alexandre Benoit, Nicolas Voiron, et al. IRIM at TRECVID 2016: Instance Search. TRECVid workshop 2016, Nov 2016, Gaithersburg, Maryland, United States. 2016, TRECVid workshop proceedings. http://www-nlpir.nist.gov/projects/trecvid/. hal-01416953 view on HAL pdf
  • Aditya Arie Nugraha, Antoine Liutkus, Emmanuel Vincent. Multichannel Music Separation with Deep Neural Networks. European Signal Processing Conference (EUSIPCO), Aug 2016, Budapest, Hungary. pp.1748-1752, Proceedings of the 24th European Signal Processing Conference (EUSIPCO) http://www.eusipco2016.org/. hal-01334614v2 view on HAL pdf


Latest news

Troll and Gemini clusters are now exotic resources (change in the way to reserve them)

The troll cluster in Grenoble and the gemini cluster in Lyon are now considered exotic resources, and must be reserved using the exotic OAR job type.

When a cluster on Grid'5000 has a hardware specificity that makes it too different from a "standard" configuration, it is reservable only using the exotic OAR job type. There are two reasons for this:

  • It ensures that your experiment won't run on potentially incompatible hardware unless you explicitly allow it (for example, you don't want to get an aarch64 cluster if your experiment is built for x86).
  • Since these resources are not allocated to jobs by default, they remain more readily available for users who are specifically looking for this kind of hardware.

An example of usage of the exotic job type is given in the Getting Started tutorial: https://www.grid5000.fr/w/Getting_Started#Selecting_specific_resources
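
For instance, a minimal sketch of an interactive reservation on an exotic cluster (the cluster name is chosen for illustration):

# The exotic job type must be given explicitly, otherwise such nodes are excluded by default
oarsub -I -t exotic -p "cluster='pyxis'"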

You can check whether a cluster is exotic in the reference API or on the Hardware page of the wiki: https://www.grid5000.fr/w/Hardware#Clusters
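
As a sketch, assuming the reference API exposes an exotic boolean flag in each cluster description (field name inferred from this announcement, to be checked against the actual API output):

# Query the reference API for the gemini cluster description and print its exotic flag
curl -s https://api.grid5000.fr/stable/sites/lyon/clusters/gemini | jq .exotic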

There are currently four clusters which need the exotic job type to be reserved:

  • pyxis because it has a non-x86 CPU architecture (aarch64)
  • drac because it has a non-x86 CPU architecture (ppc64)
  • troll because it has PMEM (https://www.grid5000.fr/w/PMEM)
  • gemini because it has 8 V100 GPUs per node, and only 2 nodes

-- Grid'5000 Team 09:40, January 26th 2021 (CET)

New Grid'5000 API documentation and specification

The Grid'5000 API documentation has been updated. Before this update, the documentation contained both the specification and tutorials for the API (with some parts also duplicated on the wiki).

To be more consistent, https://api.grid5000.fr/doc/ now provides only the specification (HTTP paths, parameters, payloads, …). All tutorials were moved to the Grid'5000 wiki and updated in the process.

The new API specification can be viewed with two tools: the first lets you read the specification and look up information; the second lets you explore the API through an interactive playground.
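
The endpoints described by the specification can also be queried directly; as a minimal sketch, listing all Grid'5000 sites through the stable API version (the jq filter assumes collections are wrapped in an items array):

# List the uid of every Grid'5000 site known to the reference API
curl -s https://api.grid5000.fr/stable/sites | jq '.items[].uid'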

Please note that the specification may contain errors. Please report any such errors to the Support Staff.

-- Grid'5000 Team 14:30, January 11th 2021 (CET)

Important changes in the privilege levels of users

Each Grid'5000 user is a member of at least one access-granting group, which depends on their situation (location, laboratory, ...).

Each group is given a privilege level (bronze, silver, gold), depending on how the related organization is involved in Grid'5000's development and support.

Until now, however, these levels had no impact on how Grid'5000 could be used.

Starting from December 10th, 2020, each user will be granted different usage rights on the testbed depending on their privilege level. In particular:

  • every level continues to give access to the Grid'5000 default queue (most of Grid'5000's resources);
  • access to the production and besteffort queues will only be granted to the silver and gold levels.

The complete description of each level of privileges is available here.

The privilege level of the groups a user is a member of is shown in the "group" tab of the management interface.

Note that if a user is a member of several groups, one is set as the default and is implicitly used when submitting jobs.

The "--project" OAR option can also be used to explicitly set which group a job should be attributed to. For instance:

oarsub -I -q production --project=myothergroup

Do not hesitate to contact the Support Staff for any questions related to the privilege levels.

-- Grid'5000 Team 15:30, December 8th 2020 (CET)

Reminder: Testing phase of the new monitoring service named Kwollect

As a reminder, the testing phase of Kwollect, the new monitoring solution for Grid'5000, is still ongoing.

Several new features have become available since the last announcement:

  • Support for Prometheus metrics
  • Basic visualization dashboard
  • Fine-tuning of on-demand metrics
  • Ability to push your own metrics

See: Monitoring Using Kwollect
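
As an illustration, Kwollect metrics are exposed through the Grid'5000 API. A minimal sketch, assuming the metrics endpoint and the wattmetre_power_watt metric name from the Kwollect tutorial (site, node, metric and dates are placeholders to adapt):

# Fetch power measurements for one node over a given period (all parameters are illustrative)
curl -s 'https://api.grid5000.fr/stable/sites/lyon/metrics?nodes=gemini-1&metrics=wattmetre_power_watt&start_time=2020-12-01T08:00&end_time=2020-12-01T09:00'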

Do not hesitate to give us some feedback!

Kwollect is intended to replace the legacy monitoring systems, Kwapi and Ganglia, in the (hopefully) near future.

-- Grid'5000 Team 09:00, December 1st 2020 (CET)


Read more news

Grid'5000 sites

Current funding

Since June 2008, Inria has been the main contributor to Grid'5000 funding.

INRIA


CNRS


Universities

Université Grenoble Alpes, Grenoble INP
Université Rennes 1, Rennes
Institut National Polytechnique de Toulouse / INSA / FERIA / Université Paul Sabatier, Toulouse
Université Bordeaux 1, Bordeaux
Université Lille 1, Lille
École Normale Supérieure, Lyon

Regional councils

Aquitaine
Auvergne-Rhône-Alpes
Bretagne
Champagne-Ardenne
Provence Alpes Côte d'Azur
Hauts de France
Lorraine