Lille:Hardware

From Grid5000

Latest revision as of 09:49, 6 May 2024

See also: Network topology for Lille

Summary

  • 4 clusters
  • 29 nodes
  • 1024 CPU cores
  • 48 GPUs
  • 284672 GPU cores
  • 9.0 TiB RAM
  • 71 SSDs and 48 HDDs on nodes (total: 313.02 TB)
  • 57.8 TFLOPS (excluding GPUs)
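These totals can be sanity-checked against the per-cluster tables below. A quick shell cross-check, using only the node, CPU, and GPU counts listed on this page:

```shell
# Cross-check the summary totals from the per-cluster counts on this page.
# CPU cores = nodes x CPUs/node x cores/CPU, summed over the four clusters.
cores=$(( 8*2*16 + 8*2*12 + 5*2*32 + 8*1*32 ))  # chiclet + chifflot + chirop + chuc
gpus=$(( 8*2 + 8*4 ))                            # chifflot (2/node) + chuc (4/node)
nodes=$(( 8 + 8 + 5 + 8 ))
echo "$nodes nodes, $cores cores, $gpus GPUs"    # 29 nodes, 1024 cores, 48 GPUs
```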

Clusters summary

Default queue resources

Cluster | Access Condition | Date of arrival | Manufacturing date | Nodes | CPUs/node | CPU model | Cores/CPU | Architecture | Memory | Storage | Network | Accelerators
chiclet | - | 2018-08-06 | 2018-07-27 | 8 | 2 | AMD EPYC 7301 | 16 | x86_64 | 128 GiB | 480 GB SSD + 2 x 4.0 TB HDD* | 2 x 25 Gbps | -
chifflot | - | 2018-08-01 | 2018-07-17 | 8 | 2 | Intel Xeon Gold 6126 | 12 | x86_64 | 192 GiB | 480 GB SSD + 480 GB SSD* + 4 x 4.0 TB HDD* | 2 x 25 Gbps | [1-6]: 2 x Nvidia Tesla P100 (16 GiB); [7-8]: 2 x Nvidia Tesla V100 (32 GiB)

*: disk is reservable      **: crossed-out GPUs are not supported by Grid'5000 default environments
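Disks marked with * can be reserved along with the node they belong to. As a hedged sketch only — the authoritative syntax is on the Disk_reservation page, and the resource-type name below is an assumption to verify there:

```shell
# Hedged sketch: reserve a chiclet node together with its reservable disks.
# The "type='disk'" resource type follows the Disk_reservation page's
# convention; check the current documented syntax before relying on this.
oarsub -I -l "{(type='disk' or type='default') and cluster='chiclet'}/host=1"
```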

Testing queue resources

Cluster | Access Condition | Date of arrival | Manufacturing date | Nodes | CPUs/node | CPU model | Cores/CPU | Architecture | Memory | Storage | Network | Accelerators
chirop | testing queue | 2024-01-25 | 2023-05-02 | 5 | 2 | Intel Xeon Platinum 8358 | 32 | x86_64 | 512 GiB | 1.92 TB SSD + 2 x 3.84 TB SSD | 2 x 25 Gbps | -
chuc | testing queue | 2024-01-22 | 2023-05-02 | 8 | 1 | AMD EPYC 7513 | 32 | x86_64 | 512 GiB | 1.92 TB SSD + 3 x 1.92 TB SSD | 2 x 25 Gbps (SR-IOV) | 4 x Nvidia A100 (40 GiB)

*: disk is reservable      **: crossed-out GPUs are not supported by Grid'5000 default environments

Clusters in the default queue

chiclet

8 nodes, 16 cpus, 256 cores (json)

Reservation example:

flille$ oarsub -p chiclet -I
Model: Dell PowerEdge R7425
Manufacturing date: 2018-07-27
Date of arrival: 2018-08-06
CPU: AMD EPYC 7301 (Zen), x86_64, 2 CPUs/node, 16 cores/CPU
Memory: 128 GiB
Storage:
  • disk0, 480 GB SSD SAS Toshiba PX05SVB048Y (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:61:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:61:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:61:00.0-scsi-0:0:2:0) (reservable)
Network:
  • eth0/enp98s0f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
  • eth1/enp98s0f1, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e (multi NICs example)
  • eth2/eno1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth3/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth4/eno3, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
  • eth5/eno4, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment

chifflot

8 nodes, 16 cpus, 192 cores, split as follows due to differences between nodes (json)

Reservation example:

flille$ oarsub -p chifflot -I
chifflot-[1,4-5] (3 nodes, 6 cpus, 72 cores)
Model: Dell PowerEdge R740
Manufacturing date: 2018-07-17
Date of arrival: 2018-08-01
CPU: Intel Xeon Gold 6126 (Skylake), x86_64, 2.60GHz, 2 CPUs/node, 12 cores/CPU
Memory: 192 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk4*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:4:0) (reservable)
  • disk5, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk5*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:5:0) (reservable)
Network:
  • eth0/ens5f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
  • eth1/ens5f1, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e (multi NICs example)
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: 2 x Nvidia Tesla P100-PCIE-16GB (16 GiB)
Compute capability: 6.0

chifflot-2 (1 node, 2 cpus, 24 cores)
Model: Dell PowerEdge R740
Manufacturing date: 2018-07-17
Date of arrival: 2018-08-01
CPU: Intel Xeon Gold 6126 (Skylake), x86_64, 2.60GHz, 2 CPUs/node, 12 cores/CPU
Memory: 192 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 4.0 TB HDD SAS Toshiba MG04SCA40ENY (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk4*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:4:0) (reservable)
  • disk5, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk5*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:5:0) (reservable)
Network:
  • eth0/ens5f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
  • eth1/ens5f1, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e (multi NICs example)
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: 2 x Nvidia Tesla P100-PCIE-16GB (16 GiB)
Compute capability: 6.0

chifflot-3 (1 node, 2 cpus, 24 cores)
Model: Dell PowerEdge R740
Manufacturing date: 2018-07-17
Date of arrival: 2018-08-01
CPU: Intel Xeon Gold 6126 (Skylake), x86_64, 2.60GHz, 2 CPUs/node, 12 cores/CPU
Memory: 192 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 4.0 TB HDD SAS Toshiba MG04SCA40ENY (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 4.0 TB HDD SAS Toshiba MG04SCA40ENY (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 4.0 TB HDD SAS Toshiba MG04SCA40ENY (dev: /dev/disk4*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:4:0) (reservable)
  • disk5, 4.0 TB HDD SAS Toshiba MG04SCA40ENY (dev: /dev/disk5*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:5:0) (reservable)
Network:
  • eth0/ens5f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
  • eth1/ens5f1, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e (multi NICs example)
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: 2 x Nvidia Tesla P100-PCIE-16GB (16 GiB)
Compute capability: 6.0

chifflot-6 (1 node, 2 cpus, 24 cores)
Model: Dell PowerEdge R740
Manufacturing date: 2018-07-17
Date of arrival: 2018-08-01
CPU: Intel Xeon Gold 6126 (Skylake), x86_64, 2.60GHz, 2 CPUs/node, 12 cores/CPU
Memory: 192 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 4.0 TB HDD SAS Seagate ST4000NM0295 (dev: /dev/disk4*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:4:0) (reservable)
  • disk5, 4.0 TB HDD SAS Toshiba MG08SDA400NY (dev: /dev/disk5*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:5:0) (reservable)
Network:
  • eth0/ens5f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
  • eth1/ens5f1, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e (multi NICs example)
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: 2 x Nvidia Tesla P100-PCIE-16GB (16 GiB)
Compute capability: 6.0

chifflot-[7-8] (2 nodes, 4 cpus, 48 cores)
Model: Dell PowerEdge R740
Manufacturing date: 2018-07-17
Date of arrival: 2018-08-01
CPU: Intel Xeon Gold 6126 (Skylake), x86_64, 2.60GHz, 2 CPUs/node, 12 cores/CPU
Memory: 192 GiB
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:0:0) (primary disk)
  • disk1, 480 GB SSD SATA Intel SSDSC2KG480G7R (dev: /dev/disk1*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:1:0) (reservable)
  • disk2, 4.0 TB HDD SAS Hitachi HUS726040ALS210 (dev: /dev/disk2*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0) (reservable)
  • disk3, 4.0 TB HDD SAS Hitachi HUS726040ALS210 (dev: /dev/disk3*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:3:0) (reservable)
  • disk4, 4.0 TB HDD SAS Hitachi HUS726040ALS210 (dev: /dev/disk4*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:4:0) (reservable)
  • disk5, 4.0 TB HDD SAS Hitachi HUS726040ALS210 (dev: /dev/disk5*, by-path: /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:5:0) (reservable)
Network:
  • eth0/ens5f0, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e
  • eth1/ens5f1, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller XXV710 for 25GbE SFP28, driver: i40e (multi NICs example)
  • eth2/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth3/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth4/eno3, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
  • eth5/eno4, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
GPU: 2 x Nvidia Tesla V100-PCIE-32GB (32 GiB)
Compute capability: 7.0
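Because chifflot mixes GPU generations (P100 on chifflot-[1-6], V100 on chifflot-[7-8]), a reservation can target a single generation through OAR properties. A hedged sketch — the gpu_model property name and its exact value are assumptions to check against the site's OAR properties:

```shell
# Hedged sketch: select only the V100 nodes of chifflot.
# 'gpu_model' and its value are assumptions; list the real node
# properties via the Grid'5000 reference API before relying on this.
oarsub -I -p "cluster='chifflot' AND gpu_model LIKE 'Tesla V100%'"
```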

Clusters in the testing queue

chirop

5 nodes, 10 cpus, 320 cores (json)

Reservation example:

flille$ oarsub -q testing -p chirop -I
Access condition: testing queue
Model: DL360 Gen10+
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-25
CPU: Intel Xeon Platinum 8358 (Ice Lake), x86_64, 2.60GHz, 2 CPUs/node, 32 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 1.92 TB SSD NVME Kioxia KCD6XLUL1T92 (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:47:00.0-scsi-0:2:1:0) (primary disk)
  • disk1, 3.84 TB SSD SATA HP VK003840GWSRV (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:47:00.0-scsi-0:2:2:0)
  • disk2, 3.84 TB SSD SATA HP VK003840GWSRV (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:47:00.0-scsi-0:2:3:0)
Network:
  • eth0/ens10f0np0, Ethernet, configured rate: 25 Gbps, model: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller, driver: bnxt_en
  • eth1/ens10f1np1, Ethernet, configured rate: 25 Gbps, model: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller, driver: bnxt_en (multi NICs example)

chuc

8 nodes, 8 cpus, 256 cores, split as follows due to differences between nodes (json)

Reservation example:

flille$ oarsub -q testing -p chuc -I
chuc-1 (1 node, 1 cpu, 32 cores)
Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df66-lun-0) (primary disk)
  • disk1, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df6e-lun-0)
  • disk2, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df72-lun-0)
  • disk3, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk3, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df7a-lun-0)
Network:
  • eth0/ens15f0np0, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled
  • eth1/ens15f1np1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled (multi NICs example)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB)
Compute capability: 8.0

chuc-2 (1 node, 1 cpu, 32 cores)
Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df46-lun-0) (primary disk)
  • disk1, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df6a-lun-0)
  • disk2, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df8a-lun-0)
  • disk3, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk3, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df8e-lun-0)
Network:
  • eth0/ens15f0np0, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled
  • eth1/ens15f1np1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled (multi NICs example)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB)
Compute capability: 8.0

chuc-3 (1 node, 1 cpu, 32 cores)
Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df2e-lun-0) (primary disk)
  • disk1, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df36-lun-0)
  • disk2, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df5a-lun-0)
  • disk3, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk3, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df62-lun-0)
Network:
  • eth0/ens15f0np0, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled
  • eth1/ens15f1np1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled (multi NICs example)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB)
Compute capability: 8.0

chuc-4 (1 node, 1 cpu, 32 cores)
Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df4a-lun-0) (primary disk)
  • disk1, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df4e-lun-0)
  • disk2, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df52-lun-0)
  • disk3, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk3, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df56-lun-0)
Network:
  • eth0/ens15f0np0, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled
  • eth1/ens15f1np1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled (multi NICs example)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB)
Compute capability: 8.0

chuc-5 (1 node, 1 cpu, 32 cores)
Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df5e-lun-0) (primary disk)
  • disk1, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df96-lun-0)
  • disk2, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281dfa6-lun-0)
  • disk3, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk3, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281dfae-lun-0)
Network:
  • eth0/ens15f0np0, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled
  • eth1/ens15f1np1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled (multi NICs example)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB)
Compute capability: 8.0

chuc-6 (1 node, 1 cpu, 32 cores)
Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281dfaa-lun-0) (primary disk)
  • disk1, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281dfb2-lun-0)
  • disk2, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281dfb6-lun-0)
  • disk3, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk3, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281dfda-lun-0)
Network:
  • eth0/ens15f0np0, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled
  • eth1/ens15f1np1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled (multi NICs example)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB)
Compute capability: 8.0

chuc-7 (1 node, 1 cpu, 32 cores)
Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee22f09d8e-lun-0) (primary disk)
  • disk1, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df92-lun-0)
  • disk2, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df9a-lun-0)
  • disk3, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk3, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281dfa2-lun-0)
Network:
  • eth0/ens15f0np0, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled
  • eth1/ens15f1np1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled (multi NICs example)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB)
Compute capability: 8.0

chuc-8 (1 node, 1 cpu, 32 cores)
Access condition: testing queue
Model: Apollo 6500 Gen10 Plus
Manufacturing date: 2023-05-02
Date of arrival: 2024-01-22
CPU: AMD EPYC 7513 (Zen 3), x86_64, 1 CPU/node, 32 cores/CPU
Memory: 512 GiB
Storage:
  • disk0, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk0, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df76-lun-0) (primary disk)
  • disk1, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk1, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df7e-lun-0)
  • disk2, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk2, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df82-lun-0)
  • disk3, 1.92 TB SSD SAS HPE VO001920RZWUV (dev: /dev/disk3, by-path: /dev/disk/by-path/pci-0000:c8:00.0-sas-0x58ce38ee2281df86-lun-0)
Network:
  • eth0/ens15f0np0, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled
  • eth1/ens15f1np1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27800 Family [ConnectX-5], driver: mlx5_core, SR-IOV enabled (multi NICs example)
GPU: 4 x Nvidia A100-SXM4-40GB (40 GiB)
Compute capability: 8.0
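Jobs on chuc need not occupy a whole node: individual A100 GPUs can be requested as resources. A hedged sketch, assuming the usual Grid'5000 gpu resource hierarchy:

```shell
# Hedged sketch: request 2 of the 4 A100 GPUs on a single chuc node,
# in the testing queue. 'gpu' as an OAR resource type is the standard
# Grid'5000 convention; verify against the current user documentation.
oarsub -q testing -p chuc -l gpu=2 -I
```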

Last generated from the Grid'5000 Reference API on 2024-05-06 (commit 26f2bceb0b)