Grid'5000 is a large-scale and versatile testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing including Cloud, HPC and Big Data.
- provides access to a large amount of resources: 12000 cores and 800 compute nodes grouped in homogeneous clusters, featuring various technologies: GPUs, SSDs, NVMe, 10G and 25G Ethernet, InfiniBand, Omni-Path
- highly reconfigurable and controllable: researchers can experiment with a fully customized software stack thanks to bare-metal deployment features, and can isolate their experiment at the networking layer
- advanced monitoring and measurement features for the collection of networking and power consumption traces, providing a deep understanding of experiments
- designed to support Open Science and reproducible research, with full traceability of infrastructure and software changes on the testbed
- a vibrant community of 500+ users supported by a solid technical team
Read more about our teams, our publications, and the usage policy of the testbed. Then get an account, and learn how to use the testbed with our Getting Started tutorial and the rest of our Users portal.
Grid'5000 is supported by a scientific interest group (GIS) hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations. Inria has been supporting Grid'5000 through ADT ALADDIN-G5K (2007-2013), ADT LAPLACE (2014-2016), and IPL HEMERA (2010-2014).
Current status (at 2019-08-18 08:52): 1 current event, none planned (details)
Random pick of publications
Five random publications that benefited from Grid'5000 (at least 2033 overall):
- Houssem-Eddine Chihoub, Christine Collet. iBig Hybrid Architecture for Energy IoT: When the power of Indexing meets Big Data Processing!. The IEEE International Conference on Cloud Computing Technology & Science 2017, Dec 2017, Hong Kong, Hong Kong SAR China. hal-01705607
- Yewan Wang, David Nörtershäuser, Stéphane Le Masson, Jean-Marc Menaud. Potential effects on server power metering and modeling. CloudComp 2018 - 8th EAI International Conference on Cloud Computing, Sep 2018, Exeter, United Kingdom. pp.1-12. hal-01869705
- Rafael Keller Tesser, Lucas Mello Schnorr, Arnaud Legrand, Fabrice Dupros, Philippe Navaux. Using Simulation to Evaluate and Tune the Performance of Dynamic Load Balancing of an Over-decomposed Geophysics Application. Euro-Par 2017: 23rd International European Conference on Parallel and Distributed Computing, Aug 2017, Santiago de Compostela, Spain. pp.15. hal-01567792
- Laurent Prosperi, Alexandru Costan, Pedro Silva, Gabriel Antoniu. Planner: Cost-efficient Execution Plans Placement for Uniform Stream Analytics on Edge and Cloud. WORKS 2018: 13th Workflows in Support of Large-Scale Science Workshop, held in conjunction with the IEEE/ACM SC18 conference, Nov 2018, Dallas, United States. pp.1-10. hal-01892718
- Hardik Soni, Walid Dabbous, Thierry Turletti, Hitoshi Asaeda. NFV-based Scalable Guaranteed-Bandwidth Multicast Service for Software Defined ISP networks. IEEE Transactions on Network and Service Management, IEEE, 2017, 14 (4), pp.14. 10.1109/TNSM.2017.2759167. hal-01596488
A new version of tgz-g5k has been released
We have released a new version of tgz-g5k. Tgz-g5k is a tool that allows you to extract a Grid'5000 environment tarball from a running node. The tarball can then be used by Kadeploy to re-deploy the image on other nodes/reservations (see Advanced Kadeploy for more details).
The new version has two major improvements:
- tgz-g5k is now compatible with Ubuntu and CentOS
- tgz-g5k is directly usable on frontends (you no longer need to run it through SSH).
To use tgz-g5k from a frontend, you can execute the following command:
frontend$ tgz-g5k -m MY_NODE -f ~/MY_TARBALL.tgz
For specific or non-deployed environments:
- tgz-g5k can use a specific user id to access nodes, with the -u parameter (by default, tgz-g5k accesses nodes as root)
- tgz-g5k can access nodes with oarsh/oarcp instead of ssh/scp, with the -o parameter (by default, tgz-g5k uses ssh/scp)
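For instance, the options above can be combined with the basic invocation. The following commands are a sketch based only on the parameters described here (node, tarball and user names are placeholders): the first builds a tarball over oarsh/oarcp, the second accesses the node as a specific user instead of root:
frontend$ tgz-g5k -m MY_NODE -f ~/MY_TARBALL.tgz -o
frontend$ tgz-g5k -m MY_NODE -f ~/MY_TARBALL.tgz -u MY_USER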
Note that tgz-g5k is still compatible with the previous command-line usage. For the record, you previously had to use the following command:
frontend$ ssh root@MY_NODE tgz-g5k > ~/MY_TARBALL.tgz
-- Grid'5000 Team 15:00, 07 August 2019 (CET)
Enabling GPU-level resource reservation in OAR
We have now put GPU-level resource reservation in OAR into service on Grid'5000. OAR now allows one to reserve one or more of the GPUs of a server hosting several, leaving the remaining GPUs of the server available for other jobs.
Only the reserved GPUs will be available for computing in the job, as reported for instance by the nvidia-smi command.
- To reserve one GPU in a site, one can now run: $ oarsub -l gpu=1 ...
- To reserve 2 GPUs on a host which possibly has more than 2 GPUs, one can run: $ oarsub -l host=1/gpu=2 ...
- To reserve whole nodes (servers) with all GPUs, one can still run: $ oarsub -l host=3 -p "gpu_count > 0"
- To reserve specific GPU models, one can use the "gpu_model" property in a filter: $ oarsub -l gpu=1 -p "gpu_model = 'Tesla V100'"
- One can also filter on the cluster name after looking at the hardware pages for the description of the clusters: $ oarsub -l gpu=1 -p "cluster = 'chifflet'"
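As a quick way to check this behavior, the reservation and the verification mentioned above can be combined. This is only a sketch: -I requests an interactive OAR job, and nvidia-smi -L lists the GPUs visible from within the job (it should show only the 2 reserved GPUs):
frontend$ oarsub -I -l host=1/gpu=2
node$ nvidia-smi -L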
Finally, please note that the Drawgantt pages offer options to display GPUs.
-- Grid'5000 Team 17:00, 10 July 2019 (CET)
Read more news
Since June 2008, Inria has been the main contributor to Grid'5000 funding.
Université Grenoble Alpes, Grenoble INP
Université Rennes 1, Rennes
Institut National Polytechnique de Toulouse / INSA / FERIA / Université Paul Sabatier, Toulouse
Université Bordeaux 1, Bordeaux
Université Lille 1, Lille
École Normale Supérieure, Lyon
Provence Alpes Côte d'Azur
Hauts de France