Grid5000:Home: Difference between revisions

(101 intermediate revisions by 12 users not shown)
|- valign="top"
|bgcolor="#f5fff5" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
[[Image:Logo.png|left]]
[[Image:g5k-backbone.png|thumbnail|260px|right|Grid'5000]]
'''Grid'5000 is a large-scale and flexible testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing, including Cloud, HPC, Big Data and AI.'''
 
Key features:
* provides '''access to a large amount of resources''': 15,000 cores, 800 compute nodes grouped in homogeneous clusters, featuring various technologies: PMEM, GPU, SSD, NVMe, 10G and 25G Ethernet, InfiniBand, Omni-Path
* '''highly reconfigurable and controllable''': researchers can experiment with a fully customized software stack thanks to bare-metal deployment features, and can isolate their experiment at the networking layer
* '''advanced monitoring and measurement features for collecting traces of networking activity and power consumption''', providing a deep understanding of experiments
* '''designed to support Open Science and reproducible research''', with full traceability of infrastructure and software changes on the testbed
* '''a vibrant community''' of 500+ users supported by a solid technical team
 
<br>
Read more about our [[Team|teams]], our [[Publications|publications]], and the [[Grid5000:UsagePolicy|usage policy]] of the testbed. Then [[Grid5000:Get_an_account|get an account]], and learn how to use the testbed with our [[Getting_Started|Getting Started tutorial]] and the rest of our [[:Category:Portal:User|Users portal]].
 
<b>Grid'5000 is merging with [https://fit-equipex.fr FIT] to build the [http://www.silecs.net/ SILECS Infrastructure for Large-scale Experimental Computer Science]. Read [http://www.silecs.net/wp-content/uploads/2018/04/Desprez-SILECS.pdf an Introduction to SILECS] (April 2018)</b>
 
<br>
<br>
''a scientific instrument designed to support experiment-driven research in all areas of computer science related to parallel, large-scale or distributed computing and networking'' <br>
Recently published documents and presentations:
[[media:seminaire_intro.pdf|Download the latest general introduction]], or a [https://www.grid5000.fr/screencast/index.html screencast of recent webUI developments]
* [[Media:Grid5000.pdf|Presentation of Grid'5000]] (April 2019)
* [https://www.grid5000.fr/mediawiki/images/Grid5000_science-advisory-board_report_2018.pdf Report from the Grid'5000 Science Advisory Board (2018)]
 
Older documents:
* [https://www.grid5000.fr/slides/2014-09-24-Cluster2014-KeynoteFD-v2.pdf Slides from Frederic Desprez's keynote at IEEE CLUSTER 2014]
* [https://www.grid5000.fr/ScientificCommittee/SAB%20report%20final%20short.pdf Report from the Grid'5000 Science Advisory Board (2014)]
 
<br>
Grid'5000 is supported by a scientific interest group (GIS) hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations. Inria has been supporting Grid'5000 through ADT ALADDIN-G5K (2007-2013), ADT LAPLACE (2014-2016), and IPL [[Hemera|HEMERA]] (2010-2014).
|}
|}
{{#status:0|0|0|https://www.grid5000.fr/status/upcoming.json}}
== Latest updates from Grid'5000 users ==
* '''Experiments'''
{{#experiments:3|||**}}
* '''Publications'''
{{#publications:3||**}}
==Latest news==
[[Image:Bonfire-on-demand.png|left|120px]]
=== Using Grid'5000 for on-demand extension of an OpenNebula installation ===
[http://vimeo.com/39257324 This video] shows how the [http://www.bonfire-project.eu BonFIRE project]'s testbed operated in Rennes can be extended on demand over Grid'5000 resources. This is a powerful demonstration of [[KaVLAN]] usage and of how Grid'5000 capabilities can be exposed to specific users with the [https://api.grid5000.fr Grid'5000 API]. Here, Grid'5000 resources are dynamically added to an OpenNebula installation.
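The Grid'5000 API mentioned above serves JSON descriptions of the testbed's resources. As a minimal sketch of consuming such a listing, assuming an illustrative response shape (the `items`/`uid` field names below are assumptions for illustration, not the documented schema):

```python
import json

# Illustrative JSON in the spirit of a Grid'5000 reference API collection
# response; the "items"/"uid" field names are assumptions, not the
# documented schema.
sample = '{"items": [{"uid": "rennes"}, {"uid": "lille"}, {"uid": "sophia"}]}'

def site_uids(payload):
    """Return the site identifiers listed in an API collection response."""
    return [item["uid"] for item in json.loads(payload)["items"]]

print(site_uids(sample))  # ['rennes', 'lille', 'sophia']
```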
----
[[Image:SC11LogoReverse.png|left]]
=== Grid'5000 taking part in the [http://sc11.supercomputing.org/schedule/event_detail.php?evid=wksp121 '''Support for Experimental Computer Science Workshop'''] at SC11 ===
A few speakers with experience gained using Grid'5000 will be taking part in the [http://sc11.supercomputing.org/schedule/event_detail.php?evid=wksp121 ''Support for Experimental Computer Science Workshop''] at SC11. This should be of interest to all Grid'5000 users. The abstract reads: "The ability to conduct consistent, controlled and repeatable large-scale experiments in all areas of computer science related to parallel, large-scale or distributed computing and networking is critical to the future and development of computer science. Yet conducting such experiments is still too often a challenge for researchers, students and practitioners, due to the unavailability of dedicated resources, the inability to create controlled experimental conditions, and variability in software. Availability, repeatability, and open sharing of electronic products are all still a challenge. This workshop will bring together scientists involved in building and operating two infrastructures dedicated to supporting computer science experiments, Grid'5000 in France and FutureGrid in the United States, to discuss challenges and solutions in this space. Our objectives are to share experiences and knowledge related to supporting large-scale experiments conducted on experimental infrastructures, solicit requirements, and discuss methodologies and opportunities created by emerging technologies."


The program of the workshop is available [http://graal.ens-lyon.fr/~desprez/SC11workshop.htm here].
<br>
----
=== Best poster award for the deployment of the gLite grid middleware on Grid'5000 ===
During the [http://france-grilles-2011.sciencesconf.org/ Rencontres Scientifiques France Grilles], Sébastien Badia and Lucas Nussbaum received the best poster award for their work on the [http://hal.archives-ouvertes.fr/inria-00626038/en/ deployment of the gLite grid middleware on Grid'5000]. This work was done in the context of the [[Appel Interfaces Recherche en grilles/Grilles de production 2009]], co-funded by the Institut des Grilles (CNRS) and ADT Aladdin (INRIA).
== Random pick of publications ==
{{#publications:}}


=== Best paper award at GECCO'2011===
Grid'5000 users got a Best Paper Award at ACM GECCO'2011. Congratulations to Malika Mehdi and Jean-Claude Charr for their paper "A Cooperative Tree-based Hybrid GA-B&B Approach for Solving Challenging Permutation-based Problems", co-authored with Nouredine Melab, El-Ghazali Talbi and Pascal Bouvry.
----
=== GPU day at Lille ===
A cluster with GPUs has been deployed in Lille since April 5th, 2011. A tutorial day is therefore organized on Tuesday June 28th at INRIA Lille, to present Grid'5000 and to learn how to use these new resources. Please refer to [http://www.lifl.fr/~derbel/gpu/ the details] if you wish to participate.
----
[[Image:SCCampLogo.png|left]]
=== Grid'5000 used as a learning platform during [http://www.sc-camp.org/ SC-Camp 2011] ===
[http://www.sc-camp.org SC-Camp] is an initiative of researchers to offer to undergraduate and master students state-of-the-art lectures and programming practical sessions upon High Performance and
Distributed Computing topics. In 2010 the event was in Bucaramanga, Colombia. In 2011 the event will be hosted by Universidad de Costa Rica, Sede del Atlántico en Turrialba. SC-Camp is a non-profit event, composed by 7 days starting on July the 10th of 2011. Of those days 6 are dedicated to scientific lectures, practical programming sessions and a parallel programming contest.
----
[[Image:Cheat_Sheet_mini.png|left|120px|]]
=== First [[Media:g5k_cheat_sheet.pdf|Cheat sheet created]] ===
If you are one of those who enjoy a recap of the different commands and links to the main help pages, you'll be pleased to see that an admin has contributed the first [[Media:g5k_cheat_sheet.pdf|Grid'5000 cheat sheet]] to this wiki. If you wish to understand how it was built, you can read and suggest contributions in [https://www.grid5000.fr/cgi-bin/bugzilla3/show_bug.cgi?id=3679 the corresponding bug].
----
[[Image:IPDPS2011.jpg|left|120px|]]
=== Grid'5000 users get 2 out of 3 [http://www.ipdps.org/ipdps2011/2011_phd_forum.html Best Poster awards at IPDPS 2011] ===
Congratulations go to Alexandra Carpen-Amarie (IRISA, University Rennes 1, INRIA, Rennes, France) for her poster ''Towards a Self-Adaptive Data Management System for Cloud Environments'' and to Pierre Riteau (INRIA, IRISA, Rennes, France) for his poster ''Building Large Scale Dynamic Computing Infrastructures over Distributed Clouds''.
----
[[Grid5000:News|read more news]]


<br>
==Grid'5000 at a glance==
{|width="100%" cellspacing="3"  
[[Image:site_map.png|thumbnail|128px|right|Grid'5000 sites]]
* '''Grid'5000''' is a scientific instrument for the study of large-scale parallel and distributed systems. It aims at providing a '''highly reconfigurable, controllable and monitorable experimental platform''' to its users. The initial aim (circa 2003) was to reach 5000 processors in the platform; this target was later reframed as 5000 cores, a goal reached during winter 2008-2009.
* The infrastructure of Grid'5000 is geographically distributed across the sites hosting the instrument: initially 9 sites in France (10 since 2011). Porto Alegre, Brazil, is now officially becoming the first site abroad.
 
===Sites:===
{|width="75%" cellspacing="3"  
|- valign="top"
|width="33%" bgcolor="#f5f5f5" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
* [[Bordeaux:Home|Bordeaux]]
* [[Grenoble:Home|Grenoble]]
* [[Lille:Home|Lille]]
* [[Lyon:Home|Lyon]]
* [[Nancy:Home|Nancy]]
* [[Orsay:Home|Orsay]]
* [[Nantes:Home|Nantes]]
* [[PortoAlegre:Home|Porto Alegre]]
|width="33%" bgcolor="#f5f5f5" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
* [[Reims:Home|Reims]]
* [[Rennes:Home|Rennes]]
* [[Sophia:Home|Sophia-Antipolis]]
|-
|}
|}
[[Image:Software layers.png|thumbnail|271px|left|Grid'5000 allows experiments in all these software layers]]
* '''Grid'5000''' is a research effort developing a '''large-scale, nationwide infrastructure for parallel and distributed computing research'''.
* '''19 [[Grid5000:Laboratories|laboratories]]''' are involved in France, with the objective of providing the community with a testbed allowing experiments in all the software layers, from network protocols up to applications.
The current plans are to extend from the 9 initial sites, each with 100 to a thousand PCs connected by the [http://www.renater.fr RENATER] Education and Research Network, to a larger platform including a few sites outside France, not necessarily connected through a dedicated network link. Sites in Brazil and Luxembourg should join shortly, and Reims has now joined.
All sites in France are connected to [http://www.renater.fr RENATER] with a 10 Gb/s link, except Reims, which is connected through a 1 Gb/s link for the time being.
This highly collaborative research effort is funded by INRIA, CNRS, the universities of all sites and some regional councils.
== ALADDIN-G5K: ensuring the development of '''Grid'5000''' ==
For the 2008-2012 period, engineers ensuring the development and day-to-day support of the infrastructure are mostly provided by INRIA, under the ''ADT ALADDIN-G5K'' initiative.
==[[Hemera|HEMERA: Demonstrating ambitious up-scaling techniques on '''Grid'5000''']] ==
[[Hemera|Héméra]] is an INRIA large-scale initiative, started in 2010, that aims at demonstrating ambitious up-scaling techniques for large-scale distributed computing by carrying out several dimensioning experiments on the Grid'5000 infrastructure, at animating the scientific community around Grid'5000, and at enlarging the Grid'5000 community by helping newcomers make use of Grid'5000.
== Initial Rationale==
'''The foundations of Grid'5000''' have emerged from a thorough analysis and numerous discussions about methodologies used for scientific research in the Grid domain. A report presents the [http://www-sop.inria.fr/aci/grid/public/Library/rapport-grid5000-V3.pdf rationale for Grid'5000].
In addition to theory, simulators and emulators, there is a strong need for '''large-scale testbeds''' where real-life experimental conditions hold. '''The size of Grid'5000''', in terms of number of sites and number of processors per site, was established according to the scale of the experiments and the number of researchers involved in the project.


== Current funding ==
Since June 2008, Inria has been the main contributor to [[Grid5000:Funding|Grid'5000 funding]].
{|width="100%" cellspacing="3"
|-
| width="50%" bgcolor="#f5f5f5" valign="top" align="center" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
===INRIA===
[[Image:Logo_INRIA.gif|300px]]
| width="50%" bgcolor="#f5f5f5" valign="top" align="center" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
===CNRS===
[[Image:CNRS-filaire-Quadri.png|125px]]
|-
| width="50%" bgcolor="#f5f5f5" valign="top" align="center" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
===Universities===
IMT Atlantique<br/>
Université Grenoble Alpes, Grenoble INP<br/>
Université Rennes 1, Rennes<br/>
Institut National Polytechnique de Toulouse / INSA / FERIA / Université Paul Sabatier, Toulouse<br/>
Université Bordeaux 1, Bordeaux<br/>
Université Lille 1, Lille<br/>
École Normale Supérieure, Lyon<br/>
| width="50%" bgcolor="#f5f5f5" valign="top" align="center" style="border:1px solid #cccccc;padding:1em;padding-top:0.5em;"|
===Regional councils===
Aquitaine<br/>
Auvergne-Rhône-Alpes<br/>
Bretagne<br/>
Champagne-Ardenne<br/>
Provence Alpes Côte d'Azur<br/>
Hauts de France<br/>
Lorraine<br/>
|}

Latest revision as of 10:29, 26 October 2023


Current status (at 2025-05-15 00:04): 7 current events, 6 planned (details)
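The event counts in the status line above are derived from a JSON feed of scheduled events (upcoming.json). A minimal sketch of computing such a summary, assuming an illustrative feed shape (the real schema may differ):

```python
import json

# Hypothetical event feed in the spirit of status/upcoming.json;
# the "events" and "status" fields and their values are illustrative
# assumptions, not the documented format.
feed = json.loads('''
{"events": [
  {"title": "maintenance", "status": "current"},
  {"title": "upgrade",     "status": "planned"},
  {"title": "outage",      "status": "current"}
]}
''')

current = sum(1 for e in feed["events"] if e["status"] == "current")
planned = sum(1 for e in feed["events"] if e["status"] == "planned")
print(f"{current} current events, {planned} planned")  # 2 current events, 1 planned
```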


Random pick of publications

Five random publications that benefited from Grid'5000 (at least 2758 overall):

  • Etienne Le Louet, Antoine Blin, Julien Sopena, Ahmed Amamou, Kamel Haddadou. Effects of secured DNS transport on resolver performance. 2023 IEEE Symposium on Computers and Communications (ISCC), Jul 2023, Gammarth, Tunisia. pp. 238-244, 10.1109/ISCC58397.2023.10217887. hal-04220131
  • Francisco Pinto, Veronica Carlsson, Mathias Meunier, Bert Van Bocxlaer, Hammouda Elbez, et al. Morphometrics and machine learning discrimination of the middle Eocene radiolarian species Podocyrtis chalara, Podocyrtis goetheana and their morphological intermediates. Marine Micropaleontology, In press, pp. 102293. 10.1016/j.marmicro.2023.102293. hal-04215322
  • Felix Gaschi, Patricio Cerda, Parisa Rastin, Yannick Toussaint. Exploring the Relationship between Alignment and Cross-lingual Transfer in Multilingual Transformers. Findings of the Association for Computational Linguistics: ACL 2023, Jul 2023, Toronto, Canada. pp. 3020-3042, 10.18653/v1/2023.findings-acl.189. hal-04193179
  • Prerak Srivastava, Antoine Deleforge, Archontis Politis, Emmanuel Vincent. How to (Virtually) Train Your Speaker Localizer. INTERSPEECH 2023, Aug 2023, Dublin, Ireland. hal-03855912v3
  • Gaël Vila, Emmanuel Medernach, Inés Gonzalez, Axel Bonnet, Yohan Chatelain, et al. The Impact of Hardware Variability on Applications Packaged with Docker and Guix: a Case Study in Neuroimaging. ACM REP'24, ACM, Jun 2024, Rennes, France. pp. 75-84, 10.1145/3641525.3663626. hal-04480308v2


Latest news

Cluster chirop is now in the default queue of Lille with energy monitoring

Dear users,

We are pleased to announce that the Chirop[1] cluster of Lille is now available in the default queue.

This cluster consists of 5 HPE DL360 Gen10+ nodes with:

  • 2 Intel Xeon Platinum 8358 CPUs (32 cores per CPU)
  • 512 GiB memory
  • 1 x 1.92 TB NVMe SSD + 2 x 3.84 TB SSD
  • 2 x 25 Gbps Ethernet interfaces

Energy monitoring[2] is also available for this cluster[3], provided by newly installed wattmeters (similar to those already available at Lyon).

This cluster was funded by CPER CornelIA.

[1] https://www.grid5000.fr/w/Lille:Hardware#chirop

[2] https://www.grid5000.fr/w/Energy_consumption_monitoring_tutorial

[3] https://www.grid5000.fr/w/Monitoring_Using_Kwollect#Metrics_available_in_Grid.275000

-- Grid'5000 Team 16:25, 05 May 2025 (CEST)

Change of default queue based on platform

Until now, Abaca (production) users had to specify `-q production` when reserving Abaca resources with OAR.

This is no longer necessary, as your default queue is now automatically selected based on the platform your default group is associated with, as shown at https://api.grid5000.fr/explorer/selector/ and in the message displayed when connecting to a frontend.

For SLICES-FR users, there is no change, since the correct queue was already selected by default.

Additionally, the "production" queue has been renamed to "abaca", although "production" will continue to work for the foreseeable future.

Please note one case where this change may affect your workflow: when an Abaca user reserves a resource from SLICES-FR (a non-production resource), they must explicitly specify the SLICES-FR queue, which is called "default", by adding `-q default` to the OAR command.
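To make the cases above concrete, here is a small illustrative helper (not official Grid'5000 tooling) that assembles the OAR submission line for each situation; `oarsub`, `-q`, and `-I` are real OAR usage, but the helper itself is hypothetical:

```python
def oarsub_command(queue=None, interactive=True):
    """Build an oarsub command line. With no -q flag, OAR now picks the
    queue automatically from the user's default group (abaca or default)."""
    parts = ["oarsub"]
    if queue is not None:
        parts += ["-q", queue]
    if interactive:
        parts.append("-I")  # interactive job
    return " ".join(parts)

print(oarsub_command())           # oarsub -I  (queue chosen automatically)
print(oarsub_command("default"))  # oarsub -q default -I  (SLICES-FR resources)
print(oarsub_command("abaca"))    # oarsub -q abaca -I  (formerly "production")
```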

-- Abaca Grid'5000 Team 10:10, 31 March 2025 (CEST)

Cluster "musa" with Nvidia H100 GPUs is available in production queue

We are pleased to announce that a new cluster named "musa" is available in the production queue¹ of Abaca.

This cluster has been funded by Inria DSI as a shared computing resource.

It is accessible to all Abaca users. Users affiliated with Inria have access with the same level of priority, regardless of the research centre to which they are attached.

This cluster is composed of six HPE ProLiant DL385 Gen11 nodes² with 2 AMD EPYC 9254 24-core processors, 512 GiB of RAM, 2 x Nvidia H100 NVL (94 GiB) with NVLink, one 6 TB NVMe SSD and a 25 Gbps Ethernet connection.

Please note that, in order to share it efficiently, walltime is limited:

  • 6 hours for the first two nodes
  • 24 hours for the next two
  • 48 hours for the last two

The cluster "musa" is located at Sophia, hosted in the datacenter of the Inria Centre at Université Côte d'Azur.
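The walltime policy above can be summarized as a lookup. The mapping of "first/next/last two" onto the node names musa-1 .. musa-6 is an assumption based on footnote ², and this helper is illustrative, not official tooling:

```python
def musa_walltime_hours(node_index):
    """Walltime cap per musa node, per the policy above.
    Assumes the 'first two' nodes are musa-1/musa-2, and so on."""
    if node_index in (1, 2):
        return 6
    if node_index in (3, 4):
        return 24
    if node_index in (5, 6):
        return 48
    raise ValueError("musa has six nodes (musa-1 .. musa-6)")

print(musa_walltime_hours(5))  # 48
```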

¹: https://api.grid5000.fr/explorer/hardware/sophia/#musa

²: the nodes are named musa-1, musa-2, ..., musa-6

-- Grid'5000 Team 13:30, 19 March 2025 (CEST)

Cluster "Hydra" is now in the testing queue in Lyon

We are pleased to announce that the hydra[1] cluster of Lyon is now available in the testing queue.

Hydra is a cluster composed of 4 NVIDIA Grace Hopper servers[2].

Each node features:

  • 1 Nvidia Grace ARM64 CPU with 72 cores (Neoverse-V2)
  • 1 Nvidia Hopper GPU
  • 512 GB LPDDR5 memory
  • 96 GB HBM memory
  • 1 x 1 TB NVMe SSD + 1 x 1.92 TB SCSI disk

Due to its bleeding-edge hardware, the usual Grid'5000 environments are not supported by default on this cluster (Hydra requires system environments featuring a Linux kernel >= 6.6). The default system on the hydra nodes is based on Debian 11, but **does not provide a functional GPU**. However, users may deploy the ubuntugh2404-arm64-big environment, which is similar to the official Nvidia image provided for this machine and provides GPU support.

To submit a job on this cluster, the following command may be used:

    oarsub -q testing -t exotic -p hydra

This cluster is funded by INRIA and by the Laboratoire de l'Informatique du Parallélisme with ENS Lyon support.

[1] Hydra is the largest of the modern constellations according to Wikipedia: https://en.wikipedia.org/wiki/Hydra_(constellation)

[2] https://developer.nvidia.com/blog/nvidia-grace-hopper-superchip-architecture-in-depth/

-- Grid'5000 Team 16:10, 11 March 2025 (CEST)

