Lyon:Hardware

From Grid5000
Revision as of 18:28, 9 May 2022 by Pjacquot (talk | contribs)

See also: Network topology for Lyon

Summary

  • 9 clusters
  • 68 nodes
  • 1544 CPU cores
  • 107 GPUs
  • 9.99 TiB RAM
  • 34 SSDs and 59 HDDs on nodes (total: 99.14 TB)
  • 43.8 TFLOPS (excluding GPUs)
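
The summary totals can be cross-checked against the per-cluster figures listed below; a quick sketch (numbers transcribed from this page):

```python
# Per-cluster figures from this page:
# (nodes, CPUs/node, cores/CPU, GPUs/node, GiB RAM/node)
clusters = {
    "gemini":     (2,  2, 20, 8, 512),
    "hercule":    (4,  2, 6,  0, 32),
    "neowise":    (10, 1, 48, 8, 512),
    "nova":       (22, 2, 8,  0, 64),
    "orion":      (3,  2, 6,  1, 32),
    "pyxis":      (4,  2, 32, 0, 256),
    "sagittaire": (10, 2, 1,  0, 2),
    "sirius":     (1,  2, 64, 8, 1024),
    "taurus":     (12, 2, 6,  0, 32),
}
nodes = sum(n for n, _, _, _, _ in clusters.values())
cores = sum(n * cpus * c for n, cpus, c, _, _ in clusters.values())
gpus  = sum(n * g for n, _, _, g, _ in clusters.values())
ram_tib = round(sum(n * m for n, _, _, _, m in clusters.values()) / 1024, 2)
print(nodes, cores, gpus, ram_tib)  # 68 1544 107 9.99
```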

Clusters

Cluster | Access Condition | Date of arrival | Nodes | CPU | Memory | Storage | Network | Accelerators
gemini | exotic job type | 2019-09-01 | 2 | 2 x Intel Xeon E5-2698 v4, 20 cores/CPU | 512 GiB | 480 GB SSD + 4 x 1.92 TB SSD* | 10 Gbps (SR-IOV) + 3 x 100 Gbps InfiniBand | 8 x Nvidia Tesla V100 (32 GiB)
hercule | | 2012-10-02 | 4 | 2 x Intel Xeon E5-2620, 6 cores/CPU | 32 GiB | 1 x 2.0 TB HDD + 2 x 2.0 TB HDD | 10 Gbps (SR-IOV) |
neowise | exotic job type | 2021-05-17 | 10 | AMD EPYC 7642, 48 cores/CPU | 512 GiB | 1.92 TB SSD | 2 x 10 Gbps (SR-IOV) + 2 x 100 Gbps InfiniBand | 8 x AMD MI50 (32 GiB)
nova | | 2016-12-01 | 22 | 2 x Intel Xeon E5-2620 v4, 8 cores/CPU | 64 GiB | 598 GB HDD | 10 Gbps (SR-IOV) |
orion | | 2012-09-14 | 3 | 2 x Intel Xeon E5-2630, 6 cores/CPU | 32 GiB | 299 GB HDD | 10 Gbps (SR-IOV) | Nvidia Tesla M2075 (5 GiB)
pyxis | exotic job type | 2020-01-06 | 4 | 2 x ARM ThunderX2 99xx, 32 cores/CPU | 256 GiB | 1 x 250 GB SSD + 1 x 250 GB SSD | 10 Gbps (SR-IOV) + 100 Gbps InfiniBand |
sagittaire | | 2006-07-01 | 10 | 2 x AMD Opteron 250, 1 core/CPU | 2 GiB | 73 GB HDD | 1 Gbps |
sirius | exotic job type | 2021-11-18 | 1 | 2 x AMD EPYC 7742, 64 cores/CPU | 1.0 TiB | 1 x 1.92 TB SSD + 1 x 1.92 TB SSD + 4 x 3.84 TB SSD | 1 Gbps | 8 x Nvidia A100 (40 GiB)
taurus | | 2012-09-14 | 12 | 2 x Intel Xeon E5-2630, 6 cores/CPU | 32 GiB | 299 GB HDD | 10 Gbps (SR-IOV) |

*: disk is reservable
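
Reservable disks (marked * above, and "(reservable)" in the storage lists below) are requested as separate OAR resources alongside the node. A hedged sketch, with a resource expression modeled on the Grid'5000 disk-reservation documentation (verify on site before relying on it); the command is echoed so the sketch runs outside Grid'5000 — on the flyon frontend, run the oarsub line directly:

```shell
# Request one gemini node together with one of its reservable disks.
# The resource expression is an assumption based on the Grid'5000 disk
# reservation tutorial; adjust cluster and walltime to your needs.
resources="{type='disk'}/host=1/disk=1+{cluster='gemini'}/host=1"
echo oarsub -t exotic -l "$resources,walltime=2:00" -I
```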

Clusters in the default queue

gemini

2 nodes, 4 CPUs, 80 cores

Reservation example:

flyon$ oarsub -t exotic -p gemini -I
Access condition: exotic job type
Model: Nvidia DGX-1
Date of arrival: 2019-09-01
CPU: Intel Xeon E5-2698 v4 (Broadwell, 2.20GHz, 2 CPUs/node, 20 cores/CPU)
Memory: 512 GiB
Storage:
  • disk0, 480 GB SSD SATA Samsung SAMSUNG MZ7KM480 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:81:00.0-scsi-0:0:14:0) (primary disk)
  • disk1, 1.92 TB SSD SATA Samsung SAMSUNG MZ7LM1T9 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:81:00.0-scsi-0:0:15:0) (reservable)
  • disk2, 1.92 TB SSD SATA Samsung SAMSUNG MZ7LM1T9 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:81:00.0-scsi-0:0:16:0) (reservable)
  • disk3, 1.92 TB SSD SATA Samsung SAMSUNG MZ7LM1T9 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:81:00.0-scsi-0:0:17:0) (reservable)
  • disk4, 1.92 TB SSD SATA Samsung SAMSUNG MZ7LM1T9 (dev: /dev/disk4*, by-path: /dev/disk/by-path/pci-0000:81:00.0-scsi-0:0:18:0) (reservable)
Network:
  • eth0/enp1s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller 10-Gigabit X540-AT2, driver: ixgbe, SR-IOV enabled
  • eth1/enp1s0f1, Ethernet, model: Intel Ethernet Controller 10-Gigabit X540-AT2, driver: ixgbe - unavailable for experiment
  • ib0, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core
  • ib1, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core
  • ib2, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core
  • ib3, InfiniBand, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core - unavailable for experiment
GPU: 8 x Nvidia Tesla V100-SXM2-32GB (32 GiB)
Compute capability: 7.0
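
With 8 GPUs per node, it is often enough to reserve a subset of them; OAR exposes a gpu resource for this. A hedged sketch (syntax per the Grid'5000 GPU reservation documentation; echoed so it runs anywhere — on flyon, run the oarsub line itself):

```shell
# Request 2 of a gemini node's 8 GPUs (with their share of CPU cores)
# rather than the whole node; "-t exotic" is still required for gemini.
cmd="oarsub -t exotic -l gpu=2 -p gemini -I"
echo "$cmd"
```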

hercule

4 nodes, 8 CPUs, 48 cores

Reservation example:

flyon$ oarsub -p hercule -I
Model: Dell PowerEdge C6220
Date of arrival: 2012-10-02
CPU: Intel Xeon E5-2620 (Sandy Bridge, 2.00GHz, 2 CPUs/node, 6 cores/CPU)
Memory: 32 GiB
Storage:
  • disk0, 2.0 TB HDD SATA Western Digital WDC WD2003FYYS-1 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1) (primary disk)
  • disk1, 2.0 TB HDD SATA Western Digital WDC WD2003FYYS-1 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:00:1f.2-ata-2)
  • disk2, 2.0 TB HDD SATA Western Digital WDC WD2003FYYS-1 (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:00:1f.2-ata-3)
Network:
  • eth0/enp130s0f0, Ethernet, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe - unavailable for experiment
  • eth1/enp130s0f1, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe, SR-IOV enabled
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment

neowise

10 nodes, 10 CPUs, 480 cores

Reservation example:

flyon$ oarsub -t exotic -p neowise -I
Access condition: exotic job type
Model: AMD-Penguin Computing
Date of arrival: 2021-05-17
CPU: AMD EPYC 7642 (Zen 2, 1 CPU/node, 48 cores/CPU)
Memory: 512 GiB
Storage: disk0, 1.92 TB SSD NVME Samsung SAMSUNG MZ1LB1T9HALS-00007 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:82:00.0-nvme-1) (primary disk)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled
  • eth1/eno2, Ethernet, configured rate: 10 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core, SR-IOV enabled - no KaVLAN
  • ib0, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core
  • ib1, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core
GPU: 8 x AMD Radeon Instinct MI50 32GB (32 GiB)

nova

22 nodes, 44 CPUs, 352 cores

Reservation example:

flyon$ oarsub -p nova -I
Model: Dell PowerEdge R430
Date of arrival: 2016-12-01
CPU: Intel Xeon E5-2620 v4 (Broadwell, 2.10GHz, 2 CPUs/node, 8 cores/CPU)
Memory: 64 GiB
Storage: disk0, 598 GB HDD RAID-0 (2 disks) Dell PERC H330 Mini (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/enp5s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled
  • eth1/enp5s0f1, Ethernet, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe - unavailable for experiment
  • eth2/eno1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth3/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth4/eno3, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth5/eno4, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment

orion

3 nodes, 6 CPUs, 36 cores

Reservation example:

flyon$ oarsub -p orion -I
Model: Dell PowerEdge R720
Date of arrival: 2012-09-14
CPU: Intel Xeon E5-2630 (Sandy Bridge, 2.30GHz, 2 CPUs/node, 6 cores/CPU)
Memory: 32 GiB
Storage: disk0, 299 GB HDD RAID-0 (1 disk) Dell PERC H710 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/enp68s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled
  • eth1/enp68s0f1, Ethernet, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe - unavailable for experiment
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: Nvidia Tesla M2075 (5 GiB)
Compute capability: 2.0

pyxis

4 nodes, 8 CPUs, 256 cores

Reservation example:

flyon$ oarsub -t exotic -p pyxis -I
Access condition: exotic job type
Model: R181-T92-00
Date of arrival: 2020-01-06
CPU: ThunderX2 99xx (Vulcan, 2 CPUs/node, 32 cores/CPU)
Memory: 256 GiB
Storage:
  • disk0, 250 GB SSD SATA Samsung Samsung SSD 860 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:0f:00.0-sas-phy2-lun-0) (primary disk)
  • disk1, 250 GB SSD SATA Samsung Samsung SSD 860 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:0f:00.0-sas-phy3-lun-0)
Network:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: QLogic Corp. FastLinQ QL41000 Series 10/25/40/50GbE Controller, driver: qede, SR-IOV enabled
  • eth1/eno2, Ethernet, model: QLogic Corp. FastLinQ QL41000 Series 10/25/40/50GbE Controller, driver: qede - unavailable for experiment
  • ib0, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core

sagittaire

10 nodes, 20 CPUs, 20 cores, split as follows due to differences between nodes

Reservation example:

flyon$ oarsub -p sagittaire -I
sagittaire-[11-12] (2 nodes, 4 CPUs, 4 cores)
Model: Sun Fire V20z
Date of arrival: 2006-07-01
CPU: AMD Opteron 250 (K8, 2 CPUs/node, 1 core/CPU)
Memory: 2 GiB
Storage: disk0, 73 GB HDD SCSI Seagate ST373307LC (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:02:04.0-scsi-0:0:1:0) (primary disk)
Network:
  • eth0/enp2s2f0, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5704 Gigabit Ethernet, driver: tg3 - unavailable for experiment
  • eth1/enp2s2f1, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5704 Gigabit Ethernet, driver: tg3

sagittaire-[2-5,13-16] (8 nodes, 16 CPUs, 16 cores)
Model: Sun Fire V20z
Date of arrival: 2006-07-01
CPU: AMD Opteron 250 (K8, 2 CPUs/node, 1 core/CPU)
Memory: 2 GiB
Storage: disk0, 73 GB HDD SCSI Seagate ST373207LC (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:02:04.0-scsi-0:0:1:0) (primary disk)
Network:
  • eth0/enp2s2f0, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5704 Gigabit Ethernet, driver: tg3 - unavailable for experiment
  • eth1/enp2s2f1, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5704 Gigabit Ethernet, driver: tg3

sirius

1 node, 2 CPUs, 128 cores

Reservation example:

flyon$ oarsub -t exotic -p sirius -I
Access condition: exotic job type
Model: Nvidia DGX A100
Date of arrival: 2021-11-18
CPU: AMD EPYC 7742 (Zen 2, 2 CPUs/node, 64 cores/CPU)
Memory: 1.0 TiB
Storage:
  • disk0, 1.92 TB SSD NVME Samsung SAMSUNG MZ1LB1T9HALS-00007 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:22:00.0-nvme-1) (primary disk)
  • disk1, 1.92 TB SSD NVME Samsung SAMSUNG MZ1LB1T9HALS-00007 (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:23:00.0-nvme-1)
  • disk2, 3.84 TB SSD NVME Kioxia KCM6DRUL3T84 (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:09:00.0-nvme-1)
  • disk3, 3.84 TB SSD NVME Kioxia KCM6DRUL3T84 (dev: /dev/disk3, by-path: /dev/disk/by-path/pci-0000:52:00.0-nvme-1)
  • disk4, 3.84 TB SSD NVME Kioxia KCM6DRUL3T84 (dev: /dev/disk4, by-path: /dev/disk/by-path/pci-0000:ca:00.0-nvme-1)
  • disk5, 3.84 TB SSD NVME Kioxia KCM6DRUL3T84 (dev: /dev/disk5, by-path: /dev/disk/by-path/pci-0000:8a:00.0-nvme-1)
Network:
  • eth0/enp226s0, Ethernet, configured rate: 1 Gbps, model: Intel I210 Gigabit Network Connection, driver: igb
  • eth1/enp225s0f0np0, Ethernet, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core - unavailable for experiment
  • eth2/enp225s0f1np1, Ethernet, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core - unavailable for experiment
  • ib0, InfiniBand, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core - unavailable for experiment
  • ib1, InfiniBand, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core - unavailable for experiment
  • ib2, InfiniBand, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core - unavailable for experiment
  • ib3, InfiniBand, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core - unavailable for experiment
  • ib4, InfiniBand, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core - unavailable for experiment
  • ib5, InfiniBand, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core - unavailable for experiment
  • ib6, InfiniBand, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core - unavailable for experiment
  • ib7, InfiniBand, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core - unavailable for experiment
GPU: 8 x Nvidia A100-SXM4-40GB (40 GiB)
Compute capability: 8.0

taurus

12 nodes, 24 CPUs, 144 cores

Reservation example:

flyon$ oarsub -p taurus -I
Model: Dell PowerEdge R720
Date of arrival: 2012-09-14
CPU: Intel Xeon E5-2630 (Sandy Bridge, 2.30GHz, 2 CPUs/node, 6 cores/CPU)
Memory: 32 GiB
Storage: disk0, 299 GB HDD RAID-0 (1 disk) Dell PERC H710 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:0:0) (primary disk)
Network:
  • eth0/enp68s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe, SR-IOV enabled
  • eth1/enp68s0f1, Ethernet, model: Intel Ethernet 10G 2P X520 Adapter, driver: ixgbe - unavailable for experiment
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment

Last generated from the Grid'5000 Reference API on 2022-05-09 (commit ef812ae635)
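
The same Reference API that generates this page can be consumed programmatically (the site listings live under api.grid5000.fr and require Grid'5000 credentials). Since a live call needs authentication, the sketch below parses an embedded sample of a cluster listing's shape; the field names are assumptions modeled on the reference repository, not a captured API response:

```python
import json

# Embedded sample mimicking a /sites/lyon/clusters listing (field names
# are assumptions; check the Reference API documentation for the real schema).
listing = json.loads("""
{"items": [
  {"uid": "gemini", "exotic": true},
  {"uid": "nova",   "exotic": false},
  {"uid": "sirius", "exotic": true}
]}
""")

def exotic_clusters(clusters):
    """Return uids of clusters that need the exotic job type."""
    return sorted(c["uid"] for c in clusters["items"] if c.get("exotic"))

print(exotic_clusters(listing))  # ['gemini', 'sirius']
```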