Nancy:Network
{{Portal|Network}}
{{Portal|User}}
= Overview of Ethernet network topology =
[[File:NancyNetwork.png|1200px]]
{{:Nancy:GeneratedNetwork}}
= HPC Networks =
Several HPC networks are available.
== Omni-Path 100G on grele and grimani nodes ==
*<code class="host">grele-1</code> to <code class="host">grele-14</code> have one 100Gbps Omni-Path card.
*<code class="host">grimani-1</code> to <code class="host">grimani-6</code> have one 100Gbps Omni-Path card.
* Card Model: Intel Omni-Path Host Fabric Adapter 100 Series, 1 Port, PCIe x8
== Omni-Path 100G on grvingt nodes ==
There's another, separate Omni-Path network connecting the 64 grvingt nodes and some servers.
Topology, generated from <code>opareports -o topology</code>:
[[File:Topology-grvingt.png|400px]]
More information about using Omni-Path with MPI is available from the [[Run_MPI_On_Grid%275000]] tutorial.
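A minimal sketch of how the report above can be regenerated, assuming a shell on a node attached to the grvingt fabric with the Omni-Path fabric tools installed (the output file name is just an example):
<pre>
# Regenerate the Omni-Path topology report used for the figure above
# (run on a node attached to the grvingt fabric; the file name is arbitrary)
opareports -o topology > grvingt-topology.out
</pre>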
== Infiniband 20G on griffon nodes ==
''Infiniband has been removed from these nodes''
== Infiniband 20G on graphene nodes ==
*<code class="host">graphene-1</code> to <code class="host">graphene-144</code> have one 20Gbps Infiniband card.
* Card Model : Mellanox Technologies MT26418 [ConnectX IB DDR, PCIe 2.0 5GT/s] ( [http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=4&menu_section=41 ConnectX] ).
* Driver : <code class="dir">mlx4_ib</code>
* OAR property : ib_rate=20 (usage sketch below)
* IP over IB addressing : <code class="host">graphene-[1..144]-ib0</code>.nancy.grid5000.fr ( 172.18.64.[1..144] )
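A minimal usage sketch of the property and addressing above, assuming the standard OAR workflow from the Nancy frontend (node count, walltime and target node are arbitrary examples):
<pre>
# Reserve graphene nodes selected by the OAR property above (interactive job)
oarsub -p "ib_rate=20" -l nodes=2,walltime=1 -I

# On a reserved node: check the IP-over-IB interface and reach another node
ip addr show ib0
ping -c 3 graphene-2-ib0.nancy.grid5000.fr
</pre>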
=== Switch ===
* Infiniband Switch 4X DDR
* Model based on Infiniscale_III
* 1 switching card Flextronics F-X43M204
* 12 line cards 4X 12 ports DDR Flextronics F-X43M203
=== Interconnection ===
The Infiniband network is physically isolated from the Ethernet networks; therefore, the Ethernet network emulated over Infiniband is isolated as well. There is no interconnection at either the L2 or the L3 layer.
== Infiniband 56G on graphite/graoully/grimoire/grisou nodes ==
*<code class="host">graoully-[1-16]</code> have one 56Gbps Infiniband card.
*<code class="host">grimoire-[1-8]</code> have one 56Gbps Infiniband card.
*<code class="host">graphite-[1-4]</code> have one 56Gbps Infiniband card.
*<code class="host">grisou-[50-51]</code> have one 56Gbps Infiniband card.
* Card Model : Mellanox Technologies MT27500 Family [ConnectX-3] ( [http://www.mellanox.com/related-docs/user_manuals/ConnectX-3_VPI_Single_and_Dual_QSFP_Port_Adapter_Card_User_Manual.pdf ConnectX-3] ).
* Driver : <code class="dir">mlx4_core</code>
* OAR property : ib_rate='56' (usage sketch below)
* IP over IB addressing : <code class="host">graoully-[1-16]-ib0</code>.nancy.grid5000.fr ( 172.18.70.[1-16] )
* IP over IB addressing : <code class="host">grimoire-[1-8]-ib0</code>.nancy.grid5000.fr ( 172.18.71.[1-8] )
* IP over IB addressing : <code class="host">graphite-[1-4]-ib0</code>.nancy.grid5000.fr ( 172.16.68.[9-12] )
* IP over IB addressing : <code class="host">grisou-[50-51]-ib0</code>.nancy.grid5000.fr ( 172.16.72.[50-51] )
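As with graphene, a sketch of selecting nodes on this 56G fabric using the OAR property above; the cluster name, node count and walltime are arbitrary examples, and the <code>cluster</code> property is assumed to be available as usual on Grid'5000:
<pre>
# Reserve two nodes carrying the 56G Infiniband card, e.g. graoully nodes (interactive job)
oarsub -p "cluster='graoully' AND ib_rate='56'" -l nodes=2,walltime=1 -I

# On a reserved node: check IP-over-IB connectivity to a neighbour
ping -c 3 graoully-2-ib0.nancy.grid5000.fr
</pre>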
=== Switch ===
* 36-port Mellanox InfiniBand SX6036
* [http://www.mellanox.com/page/products_dyn?product_family=132 Documentation]
* 36 FDR (56Gb/s) ports in a 1U switch
* 4.032Tb/s switching capacity
* FDR/FDR10 support for Forward Error Correction (FEC)
=== Interconnection ===
The Infiniband network is physically isolated from the Ethernet networks; therefore, the Ethernet network emulated over Infiniband is isolated as well. There is no interconnection at either the L2 or the L3 layer.