Lille:Hardware

See also: Network topology for Lille

Summary

  • 4 clusters
  • 29 nodes
  • 1024 CPU cores
  • 48 GPUs
  • 284672 GPU cores
  • 9.0 TiB RAM
  • 71 SSDs and 48 HDDs on nodes (total: 313.02 TB)
  • 57.8 TFLOPS (excluding GPUs)

Clusters

  • chiclet (default queue): arrived 2018-08-06, manufactured 2018-07-27; 8 nodes; 2 x AMD EPYC 7301, 16 cores/CPU, x86_64; 128 GiB RAM; 480 GB SSD + 2 x 4.0 TB HDD*; 2 x 25 Gbps Ethernet
  • chifflot (default queue): arrived 2018-08-01, manufactured 2018-07-17; 8 nodes; 2 x Intel Xeon Gold 6126, 12 cores/CPU, x86_64; 192 GiB RAM; 480 GB SSD + 480 GB SSD* + 4 x 4.0 TB HDD*; 2 x 25 Gbps Ethernet; nodes 1-6: 2 x Nvidia Tesla P100 (16 GiB), nodes 7-8: 2 x Nvidia Tesla V100 (32 GiB)
  • chirop (default queue): arrived 2024-01-25, manufactured 2023-05-02; 5 nodes; 2 x Intel Xeon Platinum 8358, 32 cores/CPU, x86_64; 512 GiB RAM; 1.92 TB SSD + 2 x 3.84 TB SSD; 2 x 25 Gbps Ethernet
  • chuc (testing queue): arrived 2024-01-22, manufactured 2023-05-02; 8 nodes; 1 x AMD EPYC 7513, 32 cores/CPU, x86_64; 512 GiB RAM; 1.92 TB SSD + 3 x 1.92 TB SSD; 2 x 25 Gbps Ethernet (SR-IOV); 4 x Nvidia A100 (40 GiB)

*: disk is reservable

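Disks marked with * are not available by default: they must be requested explicitly as OAR resources of type disk, alongside the node itself. A minimal sketch (the brace-delimited property filter below follows generic OAR syntax and is an assumption here; the Grid'5000 Disk reservation page has the authoritative form):

flille: oarsub -I -p chiclet -l "{type='disk' or type='default'}/host=1,walltime=2"

This asks for one chiclet host together with its reservable disks for two hours.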

Clusters in the default queue

chiclet

8 nodes, 16 CPUs, 256 cores

Reservation example:

flille: oarsub -p chiclet -I
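
To reserve several chiclet nodes at once, a host count and walltime can be added to the request (the values below are illustrative):

flille: oarsub -p chiclet -l host=2,walltime=2:00:00 -I
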
Model: Dell PowerEdge R7425
Manufacturing date: 2018-07-27
Date of arrival: 2018-08-06
CPU: AMD EPYC 7301 (Zen), x86_64, 2 CPUs/node, 16 cores/CPU
Memory: 128 GiB
Storage:
  • disk0, 480 GB SSD SAS Toshiba PX05SVB048Y (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:61:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:61:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:61:00.0-scsi-0:0:2:0) (reservable)
Network:
  • eth0/enp98s0f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
  • eth1/enp98s0f1, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e (multi NICs example)
  • eth2/eno1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth3/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth4/eno3, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth5/eno4, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment

chifflot

8 nodes, 16 CPUs, 192 cores, split into the following groups due to hardware differences between nodes

Reservation example:

flille: oarsub -p chifflot -I
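
chifflot mixes two GPU generations, so a specific model can be targeted with an OAR property filter. A sketch, assuming Grid'5000's gpu_model node property (the exact model string can be checked against the node descriptions below):

flille: oarsub -p "cluster='chifflot' AND gpu_model LIKE 'Tesla V100%'" -I
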
chifflot-[1,4-5] (3 nodes, 6 CPUs, 72 cores)
Model: Dell PowerEdge R740
Manufacturing date: 2018-07-17
Date of arrival: 2018-08-01
CPU: Intel Xeon Gold 6126 (Skylake), x86_64, 2.60GHz, 2 CPUs/node, 12 cores/CPU
Memory: 192 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk4*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:4:0) (reservable)
  • disk5, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk5*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:5:0) (reservable)
Network:
  • eth0/ens5f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
  • eth1/ens5f1, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e (multi NICs example)
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: 2 x Nvidia Tesla P100-PCIE-16GB (16 GiB)
Compute capability: 6.0

chifflot-2 (1 node, 2 CPUs, 24 cores)
Model: Dell PowerEdge R740
Manufacturing date: 2018-07-17
Date of arrival: 2018-08-01
CPU: Intel Xeon Gold 6126 (Skylake), x86_64, 2.60GHz, 2 CPUs/node, 12 cores/CPU
Memory: 192 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 4.0 TB HDD SAS Toshiba MG04SCA40ENY (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk4*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:4:0) (reservable)
  • disk5, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk5*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:5:0) (reservable)
Network:
  • eth0/ens5f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
  • eth1/ens5f1, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e (multi NICs example)
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: 2 x Nvidia Tesla P100-PCIE-16GB (16 GiB)
Compute capability: 6.0

chifflot-3 (1 node, 2 CPUs, 24 cores)
Model: Dell PowerEdge R740
Manufacturing date: 2018-07-17
Date of arrival: 2018-08-01
CPU: Intel Xeon Gold 6126 (Skylake), x86_64, 2.60GHz, 2 CPUs/node, 12 cores/CPU
Memory: 192 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 4.0 TB HDD SAS Toshiba MG04SCA40ENY (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 4.0 TB HDD SAS Toshiba MG04SCA40ENY (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 4.0 TB HDD SAS Toshiba MG04SCA40ENY (dev: /dev/disk4*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:4:0) (reservable)
  • disk5, 4.0 TB HDD SAS Toshiba MG04SCA40ENY (dev: /dev/disk5*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:5:0) (reservable)
Network:
  • eth0/ens5f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
  • eth1/ens5f1, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e (multi NICs example)
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: 2 x Nvidia Tesla P100-PCIE-16GB (16 GiB)
Compute capability: 6.0

chifflot-6 (1 node, 2 CPUs, 24 cores)
Model: Dell PowerEdge R740
Manufacturing date: 2018-07-17
Date of arrival: 2018-08-01
CPU: Intel Xeon Gold 6126 (Skylake), x86_64, 2.60GHz, 2 CPUs/node, 12 cores/CPU
Memory: 192 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk4*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:4:0) (reservable)
  • disk5, 4.0 TB HDD SAS Toshiba MG08SDA400NY (dev: /dev/disk5*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:5:0) (reservable)
Network:
  • eth0/ens5f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
  • eth1/ens5f1, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e (multi NICs example)
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: 2 x Nvidia Tesla P100-PCIE-16GB (16 GiB)
Compute capability: 6.0

chifflot-[7-8] (2 nodes, 4 CPUs, 48 cores)
Model: Dell PowerEdge R740
Manufacturing date: 2018-07-17
Date of arrival: 2018-08-01
CPU: Intel Xeon Gold 6126 (Skylake), x86_64, 2.60GHz, 2 CPUs/node, 12 cores/CPU
Memory: 192 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 4.0 TB HDD SAS Hitachi HUS726040ALS210 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 4.0 TB HDD SAS Hitachi HUS726040ALS210 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 4.0 TB HDD SAS Hitachi HUS726040ALS210 (dev: /dev/disk4*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:4:0) (reservable)
  • disk5, 4.0 TB HDD SAS Hitachi HUS726040ALS210 (dev: /dev/disk5*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:5:0) (reservable)
Network:
  • eth0/ens5f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
  • eth1/ens5f1, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e (multi NICs example)
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: 2 x Nvidia Tesla V100-PCIE-32GB (32 GiB)
Compute capability: 7.0

chirop

5 nodes, 10 CPUs, 320 cores

Reservation example:

flille: oarsub -p chirop -I
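
chirop nodes can also be booked in advance rather than interactively, using oarsub's -r flag (the start date below is a placeholder):

flille: oarsub -p chirop -l host=1,walltime=4 -r "2024-03-01 09:00:00"
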
Model: DL360 Gen10+
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-25
CPU: Intel Xeon Platinum 8358 (Ice Lake), x86_64, 2.60GHz, 2 CPUs/node, 32 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 1.92 TB SSD NVME Kioxia KCD6XLUL1T92 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:47:00.0-scsi-0:2:1:0) (primary disk)
  • disk1, 3.84 TB SSD SATA HP VK003840GWSRV (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:47:00.0-scsi-0:2:2:0)
  • disk2, 3.84 TB SSD SATA HP VK003840GWSRV (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:47:00.0-scsi-0:2:3:0)
Network:
  • eth0/ens10f0np0, Ethernet, configured rate: 25 Gbps, model: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller, driver: bnxt_en
  • eth1/ens10f1np1, Ethernet, configured rate: 25 Gbps, model: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller, driver: bnxt_en (multi NICs example)

Clusters in the testing queue

chuc

8 nodes, 8 CPUs, 256 cores, split into the following groups due to hardware differences between nodes

Reservation example:

flille: oarsub -q testing -p chuc -I
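
Each chuc node carries four A100s; a subset can be requested as GPU resources instead of reserving a whole node. A sketch, assuming the gpu resource type used on Grid'5000 GPU clusters:

flille: oarsub -q testing -p chuc -l gpu=2 -I
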
chuc-1 (1 node, 1 CPU, 32 cores)
Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df66-lun-0) (primary disk)
  • disk1, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df6e-lun-0)
  • disk2, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df72-lun-0)
  • disk3, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk3, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df7a-lun-0)
Network:
  • eth0/ens15f0np0, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled
  • eth1/ens15f1np1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled (multi NICs example)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB)
Compute capability: 8.0

chuc-2 (1 node, 1 CPU, 32 cores)
Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df46-lun-0) (primary disk)
  • disk1, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df6a-lun-0)
  • disk2, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df8a-lun-0)
  • disk3, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk3, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df8e-lun-0)
Network:
  • eth0/ens15f0np0, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled
  • eth1/ens15f1np1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled (multi NICs example)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB)
Compute capability: 8.0

chuc-3 (1 node, 1 CPU, 32 cores)
Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df2e-lun-0) (primary disk)
  • disk1, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df36-lun-0)
  • disk2, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df5a-lun-0)
  • disk3, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk3, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df62-lun-0)
Network:
  • eth0/ens15f0np0, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled
  • eth1/ens15f1np1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled (multi NICs example)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB)
Compute capability: 8.0

chuc-4 (1 node, 1 CPU, 32 cores)
Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df4a-lun-0) (primary disk)
  • disk1, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df4e-lun-0)
  • disk2, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df52-lun-0)
  • disk3, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk3, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df56-lun-0)
Network:
  • eth0/ens15f0np0, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled
  • eth1/ens15f1np1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled (multi NICs example)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB)
Compute capability: 8.0

chuc-5 (1 node, 1 CPU, 32 cores)
Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df5e-lun-0) (primary disk)
  • disk1, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df96-lun-0)
  • disk2, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281dfa6-lun-0)
  • disk3, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk3, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281dfae-lun-0)
Network:
  • eth0/ens15f0np0, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled
  • eth1/ens15f1np1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled (multi NICs example)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB)
Compute capability: 8.0

chuc-6 (1 node, 1 CPU, 32 cores)
Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281dfaa-lun-0) (primary disk)
  • disk1, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281dfb2-lun-0)
  • disk2, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281dfb6-lun-0)
  • disk3, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk3, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281dfda-lun-0)
Network:
  • eth0/ens15f0np0, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled
  • eth1/ens15f1np1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled (multi NICs example)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB)
Compute capability: 8.0

chuc-7 (1 node, 1 CPU, 32 cores)
Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 1.92 TB SSD SAS HPE MO001920RXRRH (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x5000c500ec8d70f5-lun-0) (primary disk)
  • disk1, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df92-lun-0)
  • disk2, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df9a-lun-0)
  • disk3, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk3, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281dfa2-lun-0)
Network:
  • eth0/ens15f0np0, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled
  • eth1/ens15f1np1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled (multi NICs example)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB)
Compute capability: 8.0

chuc-8 (1 node, 1 CPU, 32 cores)
Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df76-lun-0) (primary disk)
  • disk1, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df7e-lun-0)
  • disk2, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df82-lun-0)
  • disk3, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk3, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df86-lun-0)
Network:
  • eth0/ens15f0np0, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled
  • eth1/ens15f1np1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled (multi NICs example)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB)
Compute capability: 8.0

Last generated from the Grid'5000 Reference API on 2024-02-26 (commit 35417c85d5)
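
The underlying data can be fetched directly from that API, for example with curl from a frontend (the path is assumed from the stable API layout; access from outside Grid'5000 requires authentication):

flille: curl -s https://api.grid5000.fr/stable/sites/lille/clusters/chiclet/nodes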