Nancy:Hardware


Revision as of 14:42, 5 May 2021

See also: Network topology for Nancy

Summary

14 clusters, 376 nodes, 7912 cores, 325.5 TFLOPS

Cluster | Access Condition | Date of arrival | Nodes | CPU | Cores | Memory | Storage | Network | Accelerators
graffiti | production queue | 2019-06-07 | 13 | 2 x Intel Xeon Silver 4110 | 8 cores/CPU | 128 GiB | 479 GB HDD | 10 Gbps | [1-12]: 4 x Nvidia RTX 2080 Ti; 13: 4 x Nvidia Quadro RTX 6000
graoully | production queue | 2016-01-04 | 16 | 2 x Intel Xeon E5-2630 v3 | 8 cores/CPU | 128 GiB | 1 x 600 GB HDD + 1 x 600 GB HDD | 10 Gbps + 56 Gbps InfiniBand | -
graphique | production queue | 2015-05-12 | 6 | 2 x Intel Xeon E5-2620 v3 | 6 cores/CPU | 64 GiB | 299 GB HDD | 10 Gbps + 56 Gbps InfiniBand | 1: 2 x Nvidia Titan Black; [2-6]: 2 x Nvidia GTX 980
graphite | - | 2013-12-05 | 4 | 2 x Intel Xeon E5-2650 | 8 cores/CPU | 256 GiB | 1 x 300 GB SSD + 1 x 300 GB SSD | 10 Gbps + 56 Gbps InfiniBand | Intel Xeon Phi 7120P
grappe | production queue | 2020-08-20 | 16 | 2 x Intel Xeon Gold 5218R | 20 cores/CPU | 96 GiB | 480 GB SSD + 8.0 TB HDD* | 25 Gbps | -
grcinq | production queue | 2013-04-09 | 47 | 2 x Intel Xeon E5-2650 | 8 cores/CPU | 64 GiB | 1.0 TB HDD | 1 Gbps + 56 Gbps InfiniBand | -
grele | production queue | 2017-06-26 | 14 | 2 x Intel Xeon E5-2650 v4 | 12 cores/CPU | 128 GiB | 1 x 299 GB HDD + 1 x 299 GB HDD | 10 Gbps + 100 Gbps Omni-Path | 2 x Nvidia GTX 1080 Ti
grimani | production queue | 2016-08-30 | 6 | 2 x Intel Xeon E5-2603 v3 | 6 cores/CPU | 64 GiB | 1.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path | 2 x Nvidia Tesla K40M
grimoire | - | 2016-01-22 | 8 | 2 x Intel Xeon E5-2630 v3 | 8 cores/CPU | 128 GiB | 600 GB HDD + 4 x 600 GB HDD* + 200 GB SSD* | 4 x 10 Gbps + 56 Gbps InfiniBand | -
grisou | - | 2016-01-04 | 51 | 2 x Intel Xeon E5-2630 v3 | 8 cores/CPU | 128 GiB | 1 x 600 GB HDD + 1 x 600 GB HDD | [1-48]: 1 Gbps + 4 x 10 Gbps; 49: 4 x 10 Gbps; [50-51]: 4 x 10 Gbps + 56 Gbps InfiniBand | -
gros | - | 2019-09-04 | 124 | Intel Xeon Gold 5220 | 18 cores/CPU | 96 GiB | 480 GB SSD + 960 GB SSD* | 2 x 25 Gbps | -
grouille | exotic job type | 2021-01-13 | 2 | 2 x AMD EPYC 7452 | 32 cores/CPU | 128 GiB | 1.92 TB SSD + 960 GB SSD* | 25 Gbps | 2 x Nvidia A100
grue | production queue | 2019-11-25 | 5 | 2 x AMD EPYC 7351 | 16 cores/CPU | 128 GiB | 479 GB HDD | 10 Gbps | 4 x Nvidia Tesla T4
grvingt | production queue | 2018-04-11 | 64 | 2 x Intel Xeon Gold 6130 | 16 cores/CPU | 192 GiB | 1.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path | -

*: disk is reservable
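The headline totals in the summary can be cross-checked against the per-cluster rows. A short Python sketch, with node and core counts transcribed from the table above, recomputes them:

```python
# Per-cluster (nodes, total cores), transcribed from the summary table.
clusters = {
    "graffiti": (13, 208), "graoully": (16, 256), "graphique": (6, 72),
    "graphite": (4, 64),   "grappe": (16, 640),   "grcinq": (47, 752),
    "grele": (14, 336),    "grimani": (6, 72),    "grimoire": (8, 128),
    "grisou": (51, 816),   "gros": (124, 2232),   "grouille": (2, 128),
    "grue": (5, 160),      "grvingt": (64, 2048),
}

nodes = sum(n for n, _ in clusters.values())
cores = sum(c for _, c in clusters.values())
print(f"{len(clusters)} clusters, {nodes} nodes, {cores} cores")
# -> 14 clusters, 376 nodes, 7912 cores (matching the summary line)
```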

Clusters in default queue

graphite

4 nodes, 8 cpus, 64 cores (json)

Reservation example:

fnancy$ oarsub -p "cluster='graphite'" -I
Model: Dell PowerEdge R720
Date of arrival: 2013-12-05
CPU: Intel Xeon E5-2650 (Sandy Bridge, 2.00GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 256 GiB
Storage:
  • 300 GB SSD SATA Intel INTEL SSDSC2BB30 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:0:0) (primary disk)
  • 300 GB SSD SATA Intel INTEL SSDSC2BB30 (dev: /dev/sdb, by-path: /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:1:0)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe
  • eth1/eno2, Ethernet, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
Xeon Phi: Intel Xeon Phi 7120P

grimoire

8 nodes, 16 cpus, 128 cores (json)

Reservation example:

fnancy$ oarsub -p "cluster='grimoire'" -I
Model: Dell PowerEdge R630
Date of arrival: 2016-01-22
CPU: Intel Xeon E5-2630 v3 (Haswell, 2.40GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage:
  • 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
  • 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0) (reservable)
  • 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sdc*, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:2:0) (reservable)
  • 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sdd*, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:3:0) (reservable)
  • 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sde*, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:4:0) (reservable)
  • 200 GB SSD SAS Toshiba PX02SSF020 (dev: /dev/sdf*, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:5:0) (reservable)

*: the disk block device name /dev/sd? may vary in deployed environments; prefer referring to the by-path identifier
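Because the /dev/sd? letters can be reassigned across boots, scripts should resolve the stable by-path symlink at run time rather than hard-coding a device name. A minimal sketch; the by-path value is taken from the grimoire listing above and only exists on those nodes, hence the guard:

```shell
# Stable udev symlink from the listing above; present only on grimoire nodes.
BYPATH=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0

if [ -e "$BYPATH" ]; then
    # readlink -f follows the symlink to the current kernel name
    # (e.g. /dev/sdb, but the letter may differ on another boot).
    DEV=$(readlink -f "$BYPATH")
    echo "reservable disk is currently $DEV"
fi
```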

Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe
  • eth1/eno2, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe (multi NICs example)
  • eth2/enp129s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe (multi NICs example)
  • eth3/enp129s0f1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe (multi NICs example)
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core

grisou

51 nodes, 102 cpus, 816 cores, split as follows due to differences between nodes (json)

Reservation example:

fnancy$ oarsub -p "cluster='grisou'" -I
grisou-[1-48] (48 nodes, 96 cpus, 768 cores)
Model: Dell PowerEdge R630
Date of arrival: 2016-01-04
CPU: Intel Xeon E5-2630 v3 (Haswell, 2.40GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage:
  • 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
  • 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sdb, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe
  • eth1/eno2, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe (multi NICs example)
  • eth2/enp4s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe (multi NICs example)
  • eth3/enp4s0f1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe (multi NICs example)
  • eth4/eno3, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb (multi NICs example)
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment

grisou-49 (1 node, 2 cpus, 16 cores)
Model: Dell PowerEdge R630
Date of arrival: 2016-01-04
CPU: Intel Xeon E5-2630 v3 (Haswell, 2.40GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage:
  • 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
  • 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sdb, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe
  • eth1/eno2, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe (multi NICs example)
  • eth2/enp4s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe (multi NICs example)
  • eth3/enp4s0f1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe (multi NICs example)
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment

grisou-[50-51] (2 nodes, 4 cpus, 32 cores)
Model: Dell PowerEdge R630
Date of arrival: 2016-01-04
CPU: Intel Xeon E5-2630 v3 (Haswell, 2.40GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage:
  • 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
  • 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sdb, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe
  • eth1/eno2, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe (multi NICs example)
  • eth2/enp129s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe (multi NICs example)
  • eth3/enp129s0f1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe (multi NICs example)
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core

gros

124 nodes, 124 cpus, 2232 cores, split as follows due to differences between nodes (json)

Reservation example:

fnancy$ oarsub -p "cluster='gros'" -I
gros-[1-67,69-124] (123 nodes, 123 cpus, 2214 cores)
Model: Dell PowerEdge R640
Date of arrival: 2019-09-04
CPU: Intel Xeon Gold 5220 (Cascade Lake-SP, 2.20GHz, 1 CPU/node, 18 cores/CPU)
Memory: 96 GiB
Storage:
  • 480 GB SSD SATA Micron MTFDDAK480TDN (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:17:00.0-scsi-0:0:0:0) (primary disk)
  • 960 GB SSD SATA Micron MTFDDAK960TDN (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:17:00.0-scsi-0:0:1:0) (reservable)

*: the disk block device name /dev/sd? may vary in deployed environments; prefer referring to the by-path identifier

Network:
  • eth0/eno1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core
  • eth1/eno2, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core (multi NICs example)

gros-68 (1 node, 1 cpu, 18 cores)
Model: Dell PowerEdge R640
Date of arrival: 2019-09-04
CPU: Intel Xeon Gold 5220 (Cascade Lake-SP, 2.20GHz, 1 CPU/node, 18 cores/CPU)
Memory: 96 GiB
Storage:
  • 480 GB SSD SATA Micron MTFDDAK480TDN (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:17:00.0-scsi-0:0:0:0) (primary disk)
  • 960 GB SSD SATA Intel SSDSC2KG960G8R (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:17:00.0-scsi-0:0:1:0) (reservable)

*: the disk block device name /dev/sd? may vary in deployed environments; prefer referring to the by-path identifier

Network:
  • eth0/eno1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core
  • eth1/eno2, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core (multi NICs example)

grouille

2 nodes, 4 cpus, 128 cores (json)

Reservation example:

fnancy$ oarsub -t exotic -p "cluster='grouille'" -I
Access condition: exotic job type
Model: Dell PowerEdge R7525
Date of arrival: 2021-01-13
CPU: AMD EPYC 7452 (Zen, 2 CPUs/node, 32 cores/CPU)
Memory: 128 GiB
Storage:
  • 1.92 TB SSD SAS Toshiba KRM5XVUG1T92 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:0:1:0) (primary disk)
  • 960 GB SSD SATA Micron MTFDDAK960TDT (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:0:2:0) (reservable)

*: the disk block device name /dev/sd? may vary in deployed environments; prefer referring to the by-path identifier

Network:
  • eth0/eno1, Ethernet, model: Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth1/eno2, Ethernet, model: Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth2/eno33, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core
  • eth3/eno34, Ethernet, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core - unavailable for experiment
GPU: 2 x Nvidia A100-PCIE-40GB

Clusters in production queue

graffiti

13 nodes, 26 cpus, 208 cores, split as follows due to differences between nodes (json)
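For heterogeneous clusters like this one, the per-node JSON from the Reference API is the place to see which nodes carry which GPU. A Python sketch of the grouping, using a hand-written stand-in for the nodes.json payload (the field names here are illustrative, not the API's exact schema):

```python
from collections import defaultdict

# Illustrative stand-in for the graffiti nodes.json payload; the real
# API schema differs, but the grouping logic is the same.
nodes = (
    [{"uid": f"graffiti-{i}", "gpu_model": "GeForce RTX 2080 Ti"}
     for i in range(1, 13)]
    + [{"uid": "graffiti-13", "gpu_model": "Quadro RTX 6000"}]
)

by_gpu = defaultdict(list)
for node in nodes:
    by_gpu[node["gpu_model"]].append(node["uid"])

for model, uids in sorted(by_gpu.items()):
    print(f"{model}: {len(uids)} node(s)")
```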

Reservation example:

fnancy$ oarsub -q production -p "cluster='graffiti'" -I
graffiti-[1-12] (12 nodes, 24 cpus, 192 cores)
Access condition: production queue
Model: Dell PowerEdge T640
Date of arrival: 2019-06-07
CPU: Intel Xeon Silver 4110 (Skylake, 2.10GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage: 479 GB HDD SATA Dell PERC H330 Adp (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:19:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/eno1np0, Ethernet, model: Broadcom BCM57416 NetXtreme-E 10GBase-T RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
  • eth1/eno2np1, Ethernet, model: Broadcom BCM57416 NetXtreme-E 10GBase-T RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
  • eth2/ens4f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth3/ens4f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
GPU: 4 x Nvidia GeForce RTX 2080 Ti

graffiti-13 (1 node, 2 cpus, 16 cores)
Access condition: production queue
Model: Dell PowerEdge T640
Date of arrival: 2019-06-07
CPU: Intel Xeon Silver 4110 (Skylake, 2.10GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage: 479 GB HDD SATA Dell PERC H330 Adp (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:19:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/eno1np0, Ethernet, model: Broadcom BCM57416 NetXtreme-E 10GBase-T RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
  • eth1/eno2np1, Ethernet, model: Broadcom BCM57416 NetXtreme-E 10GBase-T RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
  • eth2/ens4f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth3/ens4f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
GPU: 4 x Nvidia Quadro RTX 6000

graoully

16 nodes, 32 cpus, 256 cores (json)

Reservation example:

fnancy$ oarsub -q production -p "cluster='graoully'" -I
Access condition: production queue
Model: Dell PowerEdge R630
Date of arrival: 2016-01-04
CPU: Intel Xeon E5-2630 v3 (Haswell, 2.40GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage:
  • 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
  • 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/sdb, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe
  • eth1/eno2, Ethernet, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe - unavailable for experiment
  • eth2/enp129s0f0, Ethernet, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe - unavailable for experiment
  • eth3/enp129s0f1, Ethernet, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core

graphique

6 nodes, 12 cpus, 72 cores, split as follows due to differences between nodes (json)

Reservation example:

fnancy$ oarsub -q production -p "cluster='graphique'" -I
graphique-1 (1 node, 2 cpus, 12 cores)
Access condition: production queue
Model: Dell PowerEdge R720
Date of arrival: 2015-05-12
CPU: Intel Xeon E5-2620 v3 (Haswell, 2.40GHz, 2 CPUs/node, 6 cores/CPU)
Memory: 64 GiB
Storage: 299 GB HDD RAID-1 (2 disks) Dell PERC H330 Mini (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x
  • eth1/eno2, Ethernet, model: Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth2/eno3, Ethernet, model: Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth3/eno4, Ethernet, model: Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 2 x Nvidia GeForce GTX TITAN Black

graphique-[2-6] (5 nodes, 10 cpus, 60 cores)
Access condition: production queue
Model: Dell PowerEdge R720
Date of arrival: 2015-05-12
CPU: Intel Xeon E5-2620 v3 (Haswell, 2.40GHz, 2 CPUs/node, 6 cores/CPU)
Memory: 64 GiB
Storage: 299 GB HDD RAID-1 (2 disks) Dell PERC H330 Mini (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x
  • eth1/eno2, Ethernet, model: Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth2/eno3, Ethernet, model: Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth3/eno4, Ethernet, model: Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 2 x Nvidia GeForce GTX 980

grappe

16 nodes, 32 cpus, 640 cores (json)

Reservation example:

fnancy$ oarsub -q production -p "cluster='grappe'" -I
Access condition: production queue
Model: Dell PowerEdge R640
Date of arrival: 2020-08-20
CPU: Intel Xeon Gold 5218R (Cascade Lake-SP, 2.10GHz, 2 CPUs/node, 20 cores/CPU)
Memory: 96 GiB
Storage:
  • 480 GB SSD SATA Intel SSDSC2KG480G8R (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:1a:00.0-scsi-0:0:0:0) (primary disk)
  • 8.0 TB HDD SAS Seagate ST8000NM0185 (dev: /dev/sdb*, by-path: /dev/disk/by-path/pci-0000:1a:00.0-scsi-0:0:1:0) (reservable)

*: the disk block device name /dev/sd? may vary in deployed environments; prefer referring to the by-path identifier

Network:
  • eth0/ens1f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
  • eth1/ens1f1, Ethernet, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e - unavailable for experiment
  • eth2/eno1, Ethernet, model: Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth3/eno2, Ethernet, model: Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth4/eno3, Ethernet, model: Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth5/eno4, Ethernet, model: Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment

grcinq

47 nodes, 94 cpus, 752 cores, split as follows due to differences between nodes (json)

Reservation example:

fnancy$ oarsub -q production -p "cluster='grcinq'" -I
grcinq-[1,5,8,18,30,46] (6 nodes, 12 cpus, 96 cores)
Access condition: production queue
Model: Dell PowerEdge C6220
Date of arrival: 2013-04-09
CPU: Intel Xeon E5-2650 (Sandy Bridge, 2.00GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 64 GiB
Storage: 1.0 TB HDD SATA Seagate ST1000NM0033-9ZM (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core

grcinq-[2-4,6-7,9-17,19-29,31-45,47] (41 nodes, 82 cpus, 656 cores)
Access condition: production queue
Model: Dell PowerEdge C6220
Date of arrival: 2013-04-09
CPU: Intel Xeon E5-2650 (Sandy Bridge, 2.00GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 64 GiB
Storage: 1.0 TB HDD SATA Western Digital WDC WD1003FBYX-1 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core

grele

14 nodes, 28 cpus, 336 cores (json)

Reservation example:

fnancy$ oarsub -q production -p "cluster='grele'" -I
Access condition: production queue
Model: Dell PowerEdge R730
Date of arrival: 2017-06-26
CPU: Intel Xeon E5-2650 v4 (Broadwell, 2.20GHz, 2 CPUs/node, 12 cores/CPU)
Memory: 128 GiB
Storage:
  • 299 GB HDD SAS Dell PERC H730 Mini (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:0:0) (primary disk)
  • 299 GB HDD SAS Dell PERC H730 Mini (dev: /dev/sdb, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:1:0)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe
  • eth1/eno2, Ethernet, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
GPU: 2 x Nvidia GeForce GTX 1080 Ti

grimani

6 nodes, 12 cpus, 72 cores (json)

Reservation example:

fnancy$ oarsub -q production -p "cluster='grimani'" -I
Access condition: production queue
Model: Dell PowerEdge R730
Date of arrival: 2016-08-30
CPU: Intel Xeon E5-2603 v3 (Haswell, 1.60GHz, 2 CPUs/node, 6 cores/CPU)
Memory: 64 GiB
Storage: 1.0 TB HDD SATA Seagate ST1000NX0423 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe
  • eth1/eno2, Ethernet, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
GPU: 2 x Nvidia Tesla K40m

grue

5 nodes, 10 cpus, 160 cores (json)

Reservation example:

fnancy$ oarsub -q production -p "cluster='grue'" -I
Access condition: production queue
Model: Dell PowerEdge R7425
Date of arrival: 2019-11-25
CPU: AMD EPYC 7351 (Zen, 2 CPUs/node, 16 cores/CPU)
Memory: 128 GiB
Storage: 479 GB HDD SAS Dell PERC H730P Adp (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:e1:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: 4 x Nvidia Tesla T4

grvingt

64 nodes, 128 cpus, 2048 cores (json)

Reservation example:

fnancy$ oarsub -q production -p "cluster='grvingt'" -I
Access condition: production queue
Model: Dell PowerEdge C6420
Date of arrival: 2018-04-11
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16 cores/CPU)
Memory: 192 GiB
Storage: 1.0 TB HDD SATA Seagate ST1000NX0443 (dev: /dev/sda, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-3) (primary disk)
Network:
  • eth0/enp24s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/enp24s0f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

Last generated from the Grid'5000 Reference API on 2021-05-05 (commit deee4fce38)