Nancy:Hardware

See also: Network topology for Nancy

Summary

  • 15 clusters
  • 371 nodes
  • 7964 CPU cores
  • 134 GPUs
  • 43.75 TiB RAM
  • 288 SSDs and 332 HDDs on nodes (total: 553.7 TB)
  • 331.8 TFLOPS (excluding GPUs)
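
These figures are generated from the Grid'5000 Reference API, which also exposes the full per-node descriptions as JSON. A minimal sketch of browsing them from a frontend (assuming the public stable API endpoint; authentication may be required when querying from outside Grid'5000):

fnancy$ curl -s https://api.grid5000.fr/stable/sites/nancy/clusters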

Clusters

Cluster | Access condition | Date of arrival | Nodes | CPU (count x model, cores/CPU, arch) | Memory | Storage | Network | Accelerators
graffiti | production queue | 2019-06-07 | 13 | 2 x Intel Xeon Silver 4110, 8 cores/CPU, x86_64 | 128 GiB | 479 GB HDD | 10 Gbps | [1-12]: 4 x Nvidia RTX 2080 Ti (11 GiB); 13: 4 x Nvidia Quadro RTX 6000 (22 GiB)
graoully | production queue | 2016-01-04 | 16 | 2 x Intel Xeon E5-2630 v3, 8 cores/CPU, x86_64 | 128 GiB | 600 GB HDD | 10 Gbps (SR-IOV) + 56 Gbps InfiniBand | —
graphique | production queue | 2015-05-12 | 5 | 2 x Intel Xeon E5-2620 v3, 6 cores/CPU, x86_64 | 64 GiB | 299 GB HDD | 10 Gbps + 56 Gbps InfiniBand | 2 x Nvidia GTX 980 (4 GiB)
graphite | — | 2013-12-05 | 4 | 2 x Intel Xeon E5-2650, 8 cores/CPU, x86_64 | 256 GiB | 300 GB SSD + 300 GB SSD | 10 Gbps (SR-IOV) + 56 Gbps InfiniBand | Intel Xeon Phi 7120P
grappe | production queue | 2020-08-20 | 16 | 2 x Intel Xeon Gold 5218R, 20 cores/CPU, x86_64 | 96 GiB | 480 GB SSD + 8.0 TB HDD* | 25 Gbps | —
grcinq | production queue | 2013-04-09 | 41 | 2 x Intel Xeon E5-2650, 8 cores/CPU, x86_64 | 64 GiB | 1.0 TB HDD | 1 Gbps (SR-IOV) + 56 Gbps InfiniBand | —
grele | production queue | 2017-06-26 | 14 | 2 x Intel Xeon E5-2650 v4, 12 cores/CPU, x86_64 | 128 GiB | 299 GB HDD + 299 GB HDD | 10 Gbps (SR-IOV) + 100 Gbps Omni-Path | 2 x Nvidia GTX 1080 Ti (11 GiB)
grimani | production queue | 2016-08-30 | 6 | 2 x Intel Xeon E5-2603 v3, 6 cores/CPU, x86_64 | 64 GiB | 1.0 TB HDD | 10 Gbps (SR-IOV) + 100 Gbps Omni-Path | 2 x Nvidia Tesla K40m (11 GiB)
grimoire | — | 2016-01-22 | 8 | 2 x Intel Xeon E5-2630 v3, 8 cores/CPU, x86_64 | 128 GiB | 600 GB HDD + 4 x 600 GB HDD* + 200 GB SSD* | 4 x 10 Gbps (SR-IOV) + 56 Gbps InfiniBand | —
grisou | — | 2016-01-04 | 49 | 2 x Intel Xeon E5-2630 v3, 8 cores/CPU, x86_64 | 128 GiB | 600 GB HDD + 600 GB HDD | [1-32,34-43,45-48]: 1 Gbps + 4 x 10 Gbps (SR-IOV); 49: 4 x 10 Gbps (SR-IOV); [50-51]: 4 x 10 Gbps (SR-IOV) + 56 Gbps InfiniBand | —
gros | — | 2019-09-04 | 124 | 1 x Intel Xeon Gold 5220, 18 cores/CPU, x86_64 | 96 GiB | 480 GB SSD + 960 GB SSD* | 2 x 25 Gbps (SR-IOV) | —
grouille | exotic job type | 2021-01-13 | 2 | 2 x AMD EPYC 7452, 32 cores/CPU, x86_64 | 128 GiB | 1.92 TB SSD + 960 GB SSD* | 25 Gbps | 2 x Nvidia A100 (40 GiB)
grue | production queue | 2019-11-25 | 5 | 2 x AMD EPYC 7351, 16 cores/CPU, x86_64 | 128 GiB | 479 GB HDD | 10 Gbps | 4 x Nvidia Tesla T4 (15 GiB)
gruss | production queue | 2021-08-26 | 4 | 2 x AMD EPYC 7352, 24 cores/CPU, x86_64 | 256 GiB | 1.92 TB SSD | 25 Gbps | 2 x Nvidia A40 (45 GiB)
grvingt | production queue | 2018-04-11 | 64 | 2 x Intel Xeon Gold 6130, 16 cores/CPU, x86_64 | 192 GiB | 1.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path | —

*: disk is reservable
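
Reservable disks are exposed as separate OAR resources of type "disk" and are not included in a plain node reservation. A minimal sketch of reserving a grimoire node together with its extra disks, following the resource expression described in the Grid'5000 disk reservation tutorial (double-check the exact expression there, and adapt the cluster name to your needs):

fnancy$ oarsub -I -p grimoire -l {"type='disk' or type='default'"}/host=1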

Clusters in the default queue

graphite

4 nodes, 8 CPUs, 64 cores

Reservation example:

fnancy$ oarsub -p graphite -I
Model: Dell PowerEdge R720
Date of arrival: 2013-12-05
CPU: Intel Xeon E5-2650 (Sandy Bridge, 2.00GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 256 GiB
Storage:
  • disk0, 300 GB SSD SATA Intel INTEL SSDSC2BB30 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 300 GB SSD SATA Intel INTEL SSDSC2BB30 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:1:0)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
Xeon Phi: Intel Xeon Phi 7120P
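
The interactive example above takes a single node with the default walltime. Node count and walltime can be set explicitly through OAR's resource request syntax, e.g. a sketch asking for two graphite nodes for two hours:

fnancy$ oarsub -p graphite -l host=2,walltime=2:00:00 -I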

grimoire

8 nodes, 16 CPUs, 128 cores

Reservation example:

fnancy$ oarsub -p grimoire -I
Model: Dell PowerEdge R630
Date of arrival: 2016-01-22
CPU: Intel Xeon E5-2630 v3 (Haswell, 2.40GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage:
  • disk0, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/disk4*, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:4:0) (reservable)
  • disk5, 200 GB SSD SAS Toshiba PX02SSF020 (dev: /dev/disk5*, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:5:0) (reservable)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
  • eth1/eno2, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled (multi NICs example)
  • eth2/enp129s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled (multi NICs example)
  • eth3/enp129s0f1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled (multi NICs example)
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
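
The interfaces flagged "multi NICs example" are cabled but left unconfigured by default. On the standard environment they can be brought up once the job starts, e.g. this sketch (assuming sudo-g5k is available on the node and eth1 is the extra NIC you want; IP addressing is left to your experiment):

grimoire-1$ sudo-g5k ip link set dev eth1 up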

grisou

49 nodes, 98 CPUs, 784 cores, split as follows due to differences between nodes

Reservation example:

fnancy$ oarsub -p grisou -I
grisou-[1-32,34-43,45-48] (46 nodes, 92 cpus, 736 cores)
Model: Dell PowerEdge R630
Date of arrival: 2016-01-04
CPU: Intel Xeon E5-2630 v3 (Haswell, 2.40GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage:
  • disk0, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
  • eth1/eno2, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled (multi NICs example)
  • eth2/enp4s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled (multi NICs example)
  • eth3/enp4s0f1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled (multi NICs example)
  • eth4/eno3, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb (multi NICs example)
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment

grisou-49 (1 node, 2 cpus, 16 cores)
Model: Dell PowerEdge R630
Date of arrival: 2016-01-04
CPU: Intel Xeon E5-2630 v3 (Haswell, 2.40GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage:
  • disk0, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
  • eth1/eno2, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled (multi NICs example)
  • eth2/enp4s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled (multi NICs example)
  • eth3/enp4s0f1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled (multi NICs example)
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment

grisou-[50-51] (2 nodes, 4 cpus, 32 cores)
Model: Dell PowerEdge R630
Date of arrival: 2016-01-04
CPU: Intel Xeon E5-2630 v3 (Haswell, 2.40GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage:
  • disk0, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
  • eth1/eno2, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled (multi NICs example)
  • eth2/enp129s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled (multi NICs example)
  • eth3/enp129s0f1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled (multi NICs example)
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core

gros

124 nodes, 124 CPUs, 2232 cores, split as follows due to differences between nodes

Reservation example:

fnancy$ oarsub -p gros -I
gros-[1-67,69-124] (123 nodes, 123 cpus, 2214 cores)
Model: Dell PowerEdge R640
Date of arrival: 2019-09-04
CPU: Intel Xeon Gold 5220 (Cascade Lake-SP, 2.20GHz, 1 CPU/node, 18 cores/CPU)
Memory: 96 GiB
Storage:
  • disk0, 480 GB SSD SATA Micron MTFDDAK480TDN (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:17:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 960 GB SSD SATA Micron MTFDDAK960TDN (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:17:00.0-scsi-0:0:1:0) (reservable)
Network:
  • eth0/eno1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled
  • eth1/eno2, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled (multi NICs example)

gros-68 (1 node, 1 cpu, 18 cores)
Model: Dell PowerEdge R640
Date of arrival: 2019-09-04
CPU: Intel Xeon Gold 5220 (Cascade Lake-SP, 2.20GHz, 1 CPU/node, 18 cores/CPU)
Memory: 96 GiB
Storage:
  • disk0, 480 GB SSD SATA Micron MTFDDAK480TDN (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:17:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 960 GB SSD SATA Intel SSDSC2KG960G8R (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:17:00.0-scsi-0:0:1:0) (reservable)
Network:
  • eth0/eno1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled
  • eth1/eno2, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled (multi NICs example)

grouille

2 nodes, 4 CPUs, 128 cores

Reservation example:

fnancy$ oarsub -t exotic -p grouille -I
Access condition: exotic job type
Model: Dell PowerEdge R7525
Date of arrival: 2021-01-13
CPU: AMD EPYC 7452 (Zen 2, 2 CPUs/node, 32 cores/CPU)
Memory: 128 GiB
Storage:
  • disk0, 1.92 TB SSD SAS Toshiba KRM5XVUG1T92 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:0:1:0) (primary disk)
  • disk1, 960 GB SSD SATA Micron MTFDDAK960TDT (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:0:2:0) (reservable)
Network:
  • eth0/eno1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth1/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth2/eno33, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core
  • eth3/eno34, Ethernet, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core - unavailable for experiment
GPU: 2 x Nvidia A100-PCIE-40GB (40 GiB)
Compute capability: 8.0
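
Since grouille is tagged exotic, the -t exotic flag shown above is mandatory. To reserve a single A100 rather than the whole node, the gpu level of the OAR resource hierarchy can be requested (a sketch):

fnancy$ oarsub -t exotic -p grouille -l gpu=1 -I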

Clusters in the production queue

graffiti

13 nodes, 26 CPUs, 208 cores, split as follows due to differences between nodes

Reservation example:

fnancy$ oarsub -q production -p graffiti -I

Max walltime per node (a submission sketch follows this list):

  • graffiti-[1-3]: 24h
  • graffiti-[4-6]: 48h
  • graffiti-[7-13]: 168h
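
A job's requested walltime determines which of these nodes can host it, so the walltime is passed explicitly at submission. A sketch of a non-interactive submission fitting the 24h tier (my_experiment.sh is a placeholder for your own script):

fnancy$ oarsub -q production -p graffiti -l host=1,walltime=24:00:00 ./my_experiment.sh
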
graffiti-[1-12] (12 nodes, 24 cpus, 192 cores)
Access condition: production queue
Model: Dell PowerEdge T640
Date of arrival: 2019-06-07
CPU: Intel Xeon Silver 4110 (Skylake, 2.10GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage: disk0, 479 GB HDD SATA Dell PERC H330 Adp (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:19:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/eno1np0, Ethernet, model: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
  • eth1/eno2np1, Ethernet, model: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
  • eth2/ens4f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth3/ens4f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
GPU: 4 x Nvidia GeForce RTX 2080 Ti (11 GiB)
Compute capability: 7.5

graffiti-13 (1 node, 2 cpus, 16 cores)
Access condition: production queue
Model: Dell PowerEdge T640
Date of arrival: 2019-06-07
CPU: Intel Xeon Silver 4110 (Skylake, 2.10GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage: disk0, 479 GB HDD SATA Dell PERC H330 Adp (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:19:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/eno1np0, Ethernet, model: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
  • eth1/eno2np1, Ethernet, model: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
  • eth2/ens4f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth3/ens4f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
GPU: 4 x Nvidia Quadro RTX 6000 (22 GiB)
Compute capability: 7.5

graoully

16 nodes, 32 CPUs, 256 cores

Reservation example:

fnancy$ oarsub -q production -p graoully -I

Max walltime per node:

  • graoully-[1-2]: 4h
  • graoully-[3-4]: 12h
  • graoully-[5-16]: 168h
Access condition: production queue
Model: Dell PowerEdge R630
Date of arrival: 2016-01-04
CPU: Intel Xeon E5-2630 v3 (Haswell, 2.40GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 128 GiB
Storage: disk0, 600 GB HDD SAS Seagate ST600MM0088 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe - unavailable for experiment
  • eth2/enp129s0f0, Ethernet, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe - unavailable for experiment
  • eth3/enp129s0f1, Ethernet, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core

graphique

5 nodes, 10 CPUs, 60 cores

Reservation example:

fnancy$ oarsub -q production -p graphique -I

Max walltime per node:

  • graphique-2: 48h
  • graphique-[3-6]: 168h
Access condition: production queue
Model: Dell PowerEdge R720
Date of arrival: 2015-05-12
CPU: Intel Xeon E5-2620 v3 (Haswell, 2.40GHz, 2 CPUs/node, 6 cores/CPU)
Memory: 64 GiB
Storage: disk0, 299 GB HDD RAID-1 (2 disks) Dell PERC H330 Mini (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x
  • eth1/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth2/eno3, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • eth3/eno4, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
GPU: 2 x Nvidia GeForce GTX 980 (4 GiB)
Compute capability: 5.2

grappe

16 nodes, 32 CPUs, 640 cores

Reservation example:

fnancy$ oarsub -q production -p grappe -I

Max walltime per node:

  • grappe-[1-4]: 48h
  • grappe-[5-8]: 96h
  • grappe-[9-16]: 168h
Access condition: production queue
Model: Dell PowerEdge R640
Date of arrival: 2020-08-20
CPU: Intel Xeon Gold 5218R (Cascade Lake-SP, 2.10GHz, 2 CPUs/node, 20 cores/CPU)
Memory: 96 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G8R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:1a:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 8.0 TB HDD SAS Seagate ST8000NM0185 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:1a:00.0-scsi-0:0:1:0) (reservable)
Network:
  • eth0/ens1f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
  • eth1/ens1f1, Ethernet, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e - unavailable for experiment
  • eth2/eno1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth3/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth4/eno3, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth5/eno4, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment

grcinq

41 nodes, 82 CPUs, 656 cores, split as follows due to differences between nodes

Reservation example:

fnancy$ oarsub -q production -p grcinq -I

Max walltime per node:

  • grcinq-[1-8]: 4h
  • grcinq-[9-16]: 12h
  • grcinq-[18-22]: 168h
grcinq-[1,5,8,18,30] (5 nodes, 10 cpus, 80 cores)
Access condition: production queue
Model: Dell PowerEdge C6220
Date of arrival: 2013-04-09
CPU: Intel Xeon E5-2650 (Sandy Bridge, 2.00GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 64 GiB
Storage: disk0, 1.0 TB HDD SATA Seagate ST1000NM0033-9ZM (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core

grcinq-[2-4,6-7,9-16,19-22,24-29,31-36,38-41,43-45] (36 nodes, 72 cpus, 576 cores)
Access condition: production queue
Model: Dell PowerEdge C6220
Date of arrival: 2013-04-09
CPU: Intel Xeon E5-2650 (Sandy Bridge, 2.00GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 64 GiB
Storage: disk0, 1.0 TB HDD SATA Western Digital WDC WD1003FBYX-1 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core

grele

14 nodes, 28 CPUs, 336 cores

Reservation example:

fnancy$ oarsub -q production -p grele -I

Max walltime per node:

  • grele-[1-3]: 24h
  • grele-[4-6]: 48h
  • grele-[7-14]: 168h
Access condition: production queue
Model: Dell PowerEdge R730
Date of arrival: 2017-06-26
CPU: Intel Xeon E5-2650 v4 (Broadwell, 2.20GHz, 2 CPUs/node, 12 cores/CPU)
Memory: 128 GiB
Storage:
  • disk0, 299 GB HDD SAS Dell PERC H730 Mini (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:0:0) (primary disk)
  • disk1, 299 GB HDD SAS Dell PERC H730 Mini (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:1:0)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
GPU: 2 x Nvidia GeForce GTX 1080 Ti (11 GiB)
Compute capability: 6.1

grimani

6 nodes, 12 CPUs, 72 cores

Reservation example:

fnancy$ oarsub -q production -p grimani -I

Max walltime per node:

  • grimani-1: 24h
  • grimani-2: 48h
  • grimani-[3-6]: 168h
Access condition: production queue
Model: Dell PowerEdge R730
Date of arrival: 2016-08-30
CPU: Intel Xeon E5-2603 v3 (Haswell, 1.60GHz, 2 CPUs/node, 6 cores/CPU)
Memory: 64 GiB
Storage: disk0, 1.0 TB HDD SATA Seagate ST1000NX0423 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
  • eth1/eno2, Ethernet, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1
GPU: 2 x Nvidia Tesla K40m (11 GiB)
Compute capability: 3.5

grue

5 nodes, 10 CPUs, 160 cores

Reservation example:

fnancy$ oarsub -q production -p grue -I

Max walltime per node:

  • grue-[1-2]: 24h
  • grue-[3-4]: 48h
  • grue-5: 168h
Access condition: production queue
Model: Dell PowerEdge R7425
Date of arrival: 2019-11-25
CPU: AMD EPYC 7351 (Zen, 2 CPUs/node, 16 cores/CPU)
Memory: 128 GiB
Storage: disk0, 479 GB HDD SAS Dell PERC H730P Adp (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:e1:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • eth2/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: 4 x Nvidia Tesla T4 (15 GiB)
Compute capability: 7.5

gruss

4 nodes, 8 CPUs, 192 cores, split as follows due to differences between nodes

Reservation example:

fnancy$ oarsub -q production -p gruss -I

Max walltime per node:

  • gruss-[1-2]: 24h
  • gruss-3: 48h
  • gruss-4: 168h
gruss-1 (1 node, 2 cpus, 48 cores)
Access condition: production queue
Model: Dell PowerEdge R7525
Date of arrival: 2021-08-26
CPU: AMD EPYC 7352 (Zen 2, 2 CPUs/node, 24 cores/CPU)
Memory: 256 GiB
Storage: disk0, 1.92 TB SSD SATA Samsung MZ7KH1T9HAJR0D3 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:0:1:0) (primary disk)
Network:
  • eth0/eno1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth1/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth2/eno33, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core
  • eth3/eno34, Ethernet, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core - unavailable for experiment
GPU: 2 x Nvidia A40 (45 GiB)
Compute capability: 8.6

gruss-[2-4] (3 nodes, 6 cpus, 144 cores)
Access condition: production queue
Model: Dell PowerEdge R7525
Date of arrival: 2021-08-26
CPU: AMD EPYC 7352 (Zen 2, 2 CPUs/node, 24 cores/CPU)
Memory: 256 GiB
Storage: disk0, 1.92 TB SSD SATA Sk Hynix HFS1T9G32FEH-BA1 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:0:0:0) (primary disk)
Network:
  • eth0/eno1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth1/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth2/eno33, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core
  • eth3/eno34, Ethernet, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core - unavailable for experiment
GPU: 2 x Nvidia A40 (45 GiB)
Compute capability: 8.6

grvingt

64 nodes, 128 CPUs, 2048 cores

Reservation example:

fnancy$ oarsub -q production -p grvingt -I

Max walltime per node:

  • grvingt-[1-8]: 4h
  • grvingt-[9-16]: 12h
  • grvingt-[17-64]: 168h
Access condition: production queue
Model: Dell PowerEdge C6420
Date of arrival: 2018-04-11
CPU: Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16 cores/CPU)
Memory: 192 GiB
Storage: disk0, 1.0 TB HDD SATA Seagate ST1000NX0443 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:00:11.5-ata-3) (primary disk)
Network:
  • eth0/enp24s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e
  • eth1/enp24s0f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment
  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

Last generated from the Grid'5000 Reference API on 2022-06-13 (commit 08ebd79dcb)