Grenoble:Hardware
Revision as of 13:47, 30 March 2022
See also: Network topology for Grenoble
Summary
5 clusters, 54 nodes, 1744 cores, 101.9 TFLOPS
Cluster | Access Condition | Date of arrival | Nodes | CPU | Cores | Memory | Storage | Network | Accelerators
---|---|---|---|---|---|---|---|---|---
dahu | | 2018-03-22 | 32 | 2 x Intel Xeon Gold 6130 | 16 cores/CPU | 192 GiB | 240 GB SSD + 480 GB SSD + 4.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path | |
drac | exotic job type | 2020-10-05 | 12 | 2 x Power POWER8NVL 1.0 | 10 cores/CPU | 128 GiB | 1 x 1.0 TB HDD + 1 x 1.0 TB HDD | 10 Gbps + 2 x 100 Gbps InfiniBand | 4 x Nvidia Tesla P100 (16 GiB)
servan | testing queue, exotic job type | 2021-12-15 | 2 | 2 x AMD EPYC 7352 | 24 cores/CPU | 128 GiB | 1 x 1.6 TB SSD + 1 x 1.6 TB SSD | 25 Gbps | |
troll | exotic job type | 2019-12-23 | 4 | 2 x Intel Xeon Gold 5218 | 16 cores/CPU | 384 GiB + 1.5 TiB PMEM | 480 GB SSD + 1.6 TB SSD | 25 Gbps (SR-IOV) + 100 Gbps Omni-Path | |
yeti | exotic job type | 2018-01-16 | 4 | 4 x Intel Xeon Gold 6130 | 16 cores/CPU | 768 GiB | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD | 10 Gbps + 100 Gbps Omni-Path | |
*: disk is reservable
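The per-cluster json links below expose this same hardware description through the Grid'5000 Reference API. As a sketch, the data can be queried with curl (the `jq` filter is illustrative and assumes the API's usual `items` collection wrapper):

```shell
# List the node names of the dahu cluster from the public Reference API
curl -s "https://public-api.grid5000.fr/stable/sites/grenoble/clusters/dahu/nodes.json?pretty=1" \
  | jq -r '.items[].uid'
```

Replacing `dahu` with any other cluster name in the URL yields the corresponding node descriptions.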
Clusters in the default queue
dahu
32 nodes, 64 cpus, 1024 cores (json)
Reservation example:
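A sketch of how such a reservation could be submitted with OAR, assuming the standard Grid'5000 `oarsub` syntax; node count, walltime, and script name are illustrative:

```shell
# Interactive job on one dahu node (default queue)
oarsub -p "cluster='dahu'" -I

# Batch job on 2 dahu nodes for 2 hours (script name is hypothetical)
oarsub -p "cluster='dahu'" -l host=2,walltime=2:00:00 "./my_experiment.sh"
```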
Model: | Dell PowerEdge C6420 |
Date of arrival: | 2018-03-22 |
CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16 cores/CPU) |
Memory: | 192 GiB |
Storage: | 240 GB SSD + 480 GB SSD + 4.0 TB HDD |
Network: | 10 Gbps + 100 Gbps Omni-Path |
drac
12 nodes, 24 cpus, 240 cores (json)
Reservation example:
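A sketch of a reservation on this cluster, assuming the standard Grid'5000 `oarsub` syntax; since drac requires the exotic job type, the `-t exotic` flag is needed:

```shell
# Interactive job on one drac node (exotic job type required)
oarsub -t exotic -p "cluster='drac'" -I
```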
Access condition: | exotic job type |
Model: | IBM PowerNV S822LC (8335-GTB) |
Date of arrival: | 2020-10-05 |
CPU: | POWER8NVL 1.0 (POWER8, 2 CPUs/node, 10 cores/CPU) |
Memory: | 128 GiB |
Storage: | 1 x 1.0 TB HDD + 1 x 1.0 TB HDD |
Network: | 10 Gbps + 2 x 100 Gbps InfiniBand |
GPU: | 4 x Nvidia Tesla P100-SXM2-16GB (16 GiB) Compute capability: 6.0 |
troll
4 nodes, 8 cpus, 128 cores (json)
Reservation example:
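A sketch of a reservation on this cluster, assuming the standard Grid'5000 `oarsub` syntax; troll requires the exotic job type, and the walltime shown is illustrative:

```shell
# Interactive job on one troll node for 1 hour (exotic job type required)
oarsub -t exotic -p "cluster='troll'" -l host=1,walltime=1:00:00 -I
```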
Access condition: | exotic job type |
Model: | Dell PowerEdge R640 |
Date of arrival: | 2019-12-23 |
CPU: | Intel Xeon Gold 5218 (Cascade Lake-SP, 2.30GHz, 2 CPUs/node, 16 cores/CPU) |
Memory: | 384 GiB + 1.5 TiB PMEM |
Storage: | 480 GB SSD + 1.6 TB SSD |
Network: | 25 Gbps (SR-IOV) + 100 Gbps Omni-Path |
yeti
4 nodes, 16 cpus, 256 cores, split as follows due to differences between nodes (json)
Reservation example:
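A sketch of a reservation on this cluster, assuming the standard Grid'5000 `oarsub` syntax; yeti requires the exotic job type:

```shell
# Interactive job on one yeti node (exotic job type required)
oarsub -t exotic -p "cluster='yeti'" -I
```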
- yeti-1 (1 node, 4 cpus, 64 cores)
Access condition: | exotic job type |
Model: | Dell PowerEdge R940 |
Date of arrival: | 2018-01-16 |
CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU) |
Memory: | 768 GiB |
Storage: | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD |
Network: | 10 Gbps + 100 Gbps Omni-Path |
- yeti-[2,4] (2 nodes, 8 cpus, 128 cores)
Access condition: | exotic job type |
Model: | Dell PowerEdge R940 |
Date of arrival: | 2018-01-16 |
CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU) |
Memory: | 768 GiB |
Storage: | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD |
Network: | 10 Gbps + 100 Gbps Omni-Path |
- yeti-3 (1 node, 4 cpus, 64 cores)
Access condition: | exotic job type |
Model: | Dell PowerEdge R940 |
Date of arrival: | 2018-01-16 |
CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU) |
Memory: | 768 GiB |
Storage: | 480 GB SSD + 3 x 2.0 TB HDD* + 2 x 1.6 TB SSD |
Network: | 10 Gbps + 100 Gbps Omni-Path |
Clusters in the testing queue
servan
2 nodes, 4 cpus, 96 cores (json)
Reservation example:
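A sketch of a reservation on this cluster, assuming the standard Grid'5000 `oarsub` syntax; since servan is in the testing queue and requires the exotic job type, both `-q testing` and `-t exotic` are needed:

```shell
# Interactive job on one servan node (testing queue, exotic job type)
oarsub -q testing -t exotic -p "cluster='servan'" -I
```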
Access condition: | testing queue, exotic job type |
Model: | Dell PowerEdge R7525 |
Date of arrival: | 2021-12-15 |
CPU: | AMD EPYC 7352 (Zen 2, 2 CPUs/node, 24 cores/CPU) |
Memory: | 128 GiB |
Storage: | 1 x 1.6 TB SSD + 1 x 1.6 TB SSD |
Network: | 25 Gbps |
Last generated from the Grid'5000 Reference API on 2022-03-30 (commit 07ffde5274)