Grenoble:Hardware
Summary
4 clusters, 52 nodes, 1648 cores, 98.4 TFLOPS
Cluster | Access Condition | Date of arrival | Nodes | CPU | Cores | Memory | Storage | Network | Accelerators
---|---|---|---|---|---|---|---|---|---
dahu | | 2018-03-22 | 32 | 2 x Intel Xeon Gold 6130 | 16 cores/CPU | 192 GiB | 240 GB SSD + 480 GB SSD + 4.0 TB HDD | 10 Gbps + 100 Gbps Omni-Path | 
drac | testing queue, exotic job type | 2020-10-05 | 12 | 2 x POWER8NVL 1.0 | 10 cores/CPU | 128 GiB | 1 x 1.0 TB HDD + 1 x 1.0 TB HDD | 10 Gbps + 2 x 100 Gbps InfiniBand | 4 x Nvidia Tesla P100
troll | | 2019-12-23 | 4 | 2 x Intel Xeon Gold 5218 | 16 cores/CPU | 384 GiB + 1.5 TiB PMEM | 480 GB SSD + 1.6 TB SSD | 10 Gbps + 100 Gbps Omni-Path | 
yeti | | 2018-01-16 | 4 | 4 x Intel Xeon Gold 6130 | 16 cores/CPU | 768 GiB | 480 GB SSD + 3 x 2.0 TB HDD* + 1.6 TB SSD | 10 Gbps + 100 Gbps Omni-Path | 
*: disk is reservable
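All hardware details are also available programmatically: the json links in each section below point to the Grid5000 reference API. As a quick sketch, the dahu node descriptions quoted below can be fetched with any HTTP client, for example:

curl -s "https://public-api.grid5000.fr/stable/sites/grenoble/clusters/dahu/nodes.json?pretty=1"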
Clusters in default queue
dahu
32 nodes, 64 cpus, 1024 cores (json)

Reservation example:

fgrenoble$ oarsub -p "cluster='dahu'" -I
Model: | Dell PowerEdge C6420 |
Date of arrival: | 2018-03-22 |
CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 2 CPUs/node, 16 cores/CPU) |
Memory: | 192 GiB |
Storage: | 240 GB SSD + 480 GB SSD + 4.0 TB HDD |
Network: | 10 Gbps + 100 Gbps Omni-Path |
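The interactive command above grabs a single node with the default walltime. As a sketch of a larger reservation (assuming standard OAR resource syntax; adjust the node count and walltime to your needs), a two-node, two-hour interactive job on dahu would look like:

fgrenoble$ oarsub -p "cluster='dahu'" -l nodes=2,walltime=2:00:00 -I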
troll
4 nodes, 8 cpus, 128 cores (json)

Reservation example:

fgrenoble$ oarsub -p "cluster='troll'" -I
Model: | Dell PowerEdge R640 |
Date of arrival: | 2019-12-23 |
CPU: | Intel Xeon Gold 5218 (Cascade Lake-SP, 2.30GHz, 2 CPUs/node, 16 cores/CPU) |
Memory: | 384 GiB + 1.5 TiB PMEM |
Storage: | 480 GB SSD + 1.6 TB SSD |
Network: | 10 Gbps + 100 Gbps Omni-Path |
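Reconfiguring the persistent memory (PMEM) modules requires root access, which a standard interactive job does not provide; a deploy job, which lets you install your own environment, is one option. A minimal sketch, assuming the usual Grid5000 deploy job type applies to troll:

fgrenoble$ oarsub -t deploy -p "cluster='troll'" -l nodes=1,walltime=2 -I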
yeti
4 nodes, 16 cpus, 256 cores, split as follows due to differences between nodes (json)

Reservation example:

fgrenoble$ oarsub -p "cluster='yeti'" -I

- yeti-[1-2,4] (3 nodes, 12 cpus, 192 cores)
Model: | Dell PowerEdge R940 |
Date of arrival: | 2018-01-16 |
CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU) |
Memory: | 768 GiB |
Storage: | 480 GB SSD + 3 x 2.0 TB HDD* + 1.6 TB SSD |
*: the disk block device name /dev/sd? may vary in deployed environments; prefer referring to the by-path identifier |
Network: | 10 Gbps + 100 Gbps Omni-Path |
- yeti-3 (1 node, 4 cpus, 64 cores)
Model: | Dell PowerEdge R940 |
Date of arrival: | 2018-01-16 |
CPU: | Intel Xeon Gold 6130 (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU) |
Memory: | 768 GiB |
Storage: | 480 GB SSD + 3 x 2.0 TB HDD* + 1.6 TB SSD |
*: the disk block device name /dev/sd? may vary in deployed environments; prefer referring to the by-path identifier |
Network: | 10 Gbps + 100 Gbps Omni-Path |
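The 2.0 TB HDDs on yeti are reservable disks (see the * note in the summary). Reserving them means requesting disk resources alongside the default ones; the following is a sketch based on OAR's hierarchical resource syntax, not the authoritative command, so check the Grid5000 disk reservation documentation for the exact form:

fgrenoble$ oarsub -I -p "cluster='yeti'" -l {"type='disk' or type='default'"}/host=1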
Clusters in testing queue
drac
12 nodes, 24 cpus, 240 cores (json)

Reservation example:

fgrenoble$ oarsub -q testing -t exotic -p "cluster='drac'" -I
Access condition: | testing queue, exotic job type |
Model: | IBM PowerNV S822LC (8335-GTB) |
Date of arrival: | 2020-10-05 |
CPU: | POWER8NVL 1.0 (POWER8, 2 CPUs/node, 10 cores/CPU) |
Memory: | 128 GiB |
Storage: | 1 x 1.0 TB HDD + 1 x 1.0 TB HDD |
Network: | 10 Gbps + 2 x 100 Gbps InfiniBand |
GPU: | 4 x Nvidia Tesla P100-SXM2-16GB |
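On clusters where OAR exposes GPUs as a resource type, a job can request individual GPUs rather than whole nodes. Assuming that holds for drac (an assumption to verify in the site documentation), a single-GPU interactive job would be sketched as:

fgrenoble$ oarsub -q testing -t exotic -p "cluster='drac'" -l gpu=1 -I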