Revision as of 09:23, 18 September 2018

Grid'5000

Grid'5000 is a large-scale and versatile testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing, including Cloud, HPC and Big Data.

Key features:

  • provides access to a large amount of resources: 12,000 cores in 800 compute nodes grouped in homogeneous clusters, featuring various technologies: GPUs, SSDs, NVMe storage, 10G Ethernet, InfiniBand, Omni-Path
  • highly reconfigurable and controllable: researchers can experiment with a fully customized software stack thanks to bare-metal deployment features, and can isolate their experiments at the network layer
  • advanced monitoring and measurement features for collecting network and power-consumption traces, providing a deep understanding of experiments
  • designed to support Open Science and reproducible research, with full traceability of infrastructure and software changes on the testbed
  • a vibrant community of 500+ users supported by a solid technical team


Read more about our teams, our publications, and the usage policy of the testbed. Then get an account, and learn how to use the testbed with our Getting Started tutorial and the rest of our Users portal.


Recently published documents and presentations:

Older documents:


Grid'5000 is supported by a scientific interest group (GIS) hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations. Inria has been supporting Grid'5000 through ADT ALADDIN-G5K (2007-2013), ADT LAPLACE (2014-2016), and IPL HEMERA (2010-2014).


Current status (at 2020-05-29 14:40): 1 current event, 1 planned (details)


Random pick of publications

Five random publications that benefited from Grid'5000 (at least 2035 overall):

  • Baptiste Jonglez, Sinan Birbalta, Martin Heusse. Persistent DNS connections for improved performance. NETWORKING 2019 - IFIP Networking 2019, May 2019, Warsaw, Poland. pp.1. hal-02149975
  • Luke Bertot. Improving the simulation of IaaS Clouds. Data Structures and Algorithms cs.DS. Université de Strasbourg, 2019. English. NNT : 2019STRAD008. tel-02161866v2
  • Jad Darrous. Scalable and Efficient Data Management in Distributed Clouds: Service Provisioning and Data Processing. Computer Science cs. Ecole normale supérieure de lyon - ENS LYON, 2019. English. tel-02501316
  • Patrick Valduriez, Marta Mattoso, Reza Akbarinia, Heraldo Borges, José Camata, et al.. Scientific Data Analysis Using Data-Intensive Scalable Computing: the SciDISC Project. LADaS: Latin America Data Science Workshop, Aug 2018, Rio de Janeiro, Brazil. lirmm-01867804
  • Pedro Bruel, Steven Quinito Masnada, Brice Videau, Arnaud Legrand, Jean-Marc Vincent, et al.. Autotuning under Tight Budget Constraints: A Transparent Design of Experiments Approach. 2018. hal-01953287


Latest news

New ARM cluster "Pyxis" available in Lyon (testing queue)

We have the pleasure to announce that a new cluster named "pyxis" is available in the testing queue in Lyon.

It is the first cluster with ARM CPUs (ThunderX2 9980) in Grid'5000! This cluster is composed of 4 nodes, each with 2 ThunderX2 9980 CPUs (32 cores/CPU, 4 threads/core), 256 GB RAM, 2 x 250 GB HDD, and 10 Gbps Ethernet. Each node's power consumption is monitored with the Lyon wattmeter (although it is currently broken¹). Adding InfiniBand networking to Pyxis is planned.

Pyxis nodes can be reserved in the testing queue (add "-q testing -p cluster='pyxis'" to your OAR submission), and arm64 environments are available to be deployed on this cluster. Beware that since this is a different CPU architecture, programs compiled for x86 (such as those provided by module) will not run.
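For instance, an interactive reservation of one Pyxis node might look like this (a sketch using the options given above; exact syntax can vary with your OAR setup):

```shell
# Reserve one Pyxis node interactively in the testing queue
# (run from the Lyon frontend; options as stated in the announcement)
oarsub -q testing -p "cluster='pyxis'" -I
```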

Any feedback is welcome.

This cluster has been funded by the CPER LECO++ Project (FEDER, Région Auvergne-Rhone-Alpes, DRRT, Inria).

¹: https://intranet.grid5000.fr/bugzilla/show_bug.cgi?id=11784

-- Grid'5000 Team 11:20, May 14th 2020 (CEST)

Ubuntu 20.04 image available

A kadeploy image (environment) for Ubuntu 20.04 (ubuntu2004-x64-min) is now available and registered on all sites with Kadeploy, along with the other supported environments (CentOS 7, CentOS 8, Ubuntu 18.04, Debian testing, and various Debian 9 and Debian 10 variants).
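As a sketch of how such an image is typically used (assuming the standard Grid'5000 deploy workflow with oarsub and kadeploy3; check the Getting Started tutorial for the authoritative steps):

```shell
# Reserve one node with deployment rights, then deploy the
# Ubuntu 20.04 image on it and copy your SSH key (-k)
oarsub -t deploy -l nodes=1 -I
kadeploy3 -e ubuntu2004-x64-min -f $OAR_NODE_FILE -k
```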

This image is built with Kameleon (just like other Grid'5000 environments). The recipe is available in the environments-recipes git repository.

If you need other system images for your work, please let us know.

-- Grid'5000 Team 14:00, April 27th 2020 (CEST)

Singularity containers in Grid'5000

We now offer better support for Singularity containers on Grid'5000. It is available in the standard environment and does not require root privileges.

Just run the "singularity" command to use it. It can also be run in an OAR submission (non-interactive batch job), for instance:

oarsub -l core=1 "/grid5000/code/bin/singularity run library://sylabsed/examples/lolcow"

More information about Singularity usage on Grid'5000 is available on the Singularity page.

Singularity is a popular container solution for HPC systems. It natively supports GPUs and high-performance networks in containers, and is compatible with Docker images. More info at: https://sylabs.io/docs/

-- Grid'5000 Team 10:07, April 23rd 2020 (CET)

Important change in the User Management System: Groups which Grant Access to the testbed

An important update to the Grid'5000 User Management System has just been deployed. It introduces a new concept: users are now granted access to the testbed through a group membership.

These Groups which Grant Access (GGAs) allow management and reporting tasks for platform usage to be delegated to managers closer to the users than the Grid'5000 site managers.

Every user must be a member of a GGA to be allowed access to the platform. The memberships are currently being worked on by the staff, the site managers, and the new GGA managers.

You may receive emails about changes to your account: don't worry, the transition to GGAs should not impact your use of the platform or your experiments.

As a reminder, if you encounter problems or have questions, please report them either on the users mailing list or to the support staff, as described in the Support page. More information about this change is available in the User Management Service documentation page.

-- Grid'5000 Team 11:07, April 22nd 2020 (CET)


Read more news

Grid'5000 sites

Current funding

Since June 2008, Inria has been the main contributor to Grid'5000 funding.

INRIA


CNRS


Universities

Université Grenoble Alpes, Grenoble INP
Université Rennes 1, Rennes
Institut National Polytechnique de Toulouse / INSA / FERIA / Université Paul Sabatier, Toulouse
Université Bordeaux 1, Bordeaux
Université Lille 1, Lille
École Normale Supérieure, Lyon

Regional councils

Aquitaine
Bretagne
Champagne-Ardenne
Provence Alpes Côte d'Azur
Hauts de France
Lorraine