Nancy:Network

From Grid5000
Latest revision as of 22:18, 5 July 2018

Overview of Ethernet network topology

[Figure: NancyNetwork.png — Nancy Ethernet network topology]


Network device models

  • gw-nancy: Cisco Nexus 9508
  • sgraoullyib: Infiniband
  • sgraphene1: 3com 4510G
  • sgraphene2: 3com 4510G
  • sgraphene3: 3com 4510G
  • sgraphene4: 3com 4510G
  • sgrapheneib: Infiniband
  • sgravillon1: HP Procurve 5406zl J8697A
  • sgrcinq: Cisco WS-C2960X-48TD-L
  • sgrele-opf: Omni-Path
  • sgriffon1: 3com 4500g
  • sgrisou1: Dell S3048
  • sgrvingt: Dell S4048

More details (including address ranges) are available from the Grid5000:Network page.

HPC Networks

Several HPC networks are available.

Omni-Path 100G on grele and grimani nodes

  • grele-1 to grele-14 have one 100Gb/s Omni-Path card.
  • grimani-1 to grimani-6 have one 100Gb/s Omni-Path card.
  • Card model: Intel Omni-Path Host Fabric Adapter 100 Series, 1 port, PCIe x8

Omni-Path 100G on grvingt nodes

There's another, separate Omni-Path network connecting the 64 grvingt nodes and some servers.

Topology, generated from opareports -o topology:

[Figure: Topology-grvingt.png — grvingt Omni-Path fabric topology]

More information about using Omni-Path with MPI is available from the Run_MPI_On_Grid'5000 tutorial.

Infiniband 20G on griffon nodes

Infiniband has been removed from these nodes.

Infiniband 20G on graphene nodes

  • graphene-1 to graphene-144 have one 20Gb/s Infiniband card.
  • Card model: Mellanox Technologies MT26418 [ConnectX IB DDR, PCIe 2.0 5GT/s].
  • Driver: mlx4_ib
  • OAR property: ib_rate=20
  • IP over IB addressing: graphene-[1..144]-ib0.nancy.grid5000.fr (172.18.64.[1..144])
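The IP-over-IB addressing above is regular: node N gets the `-ib0` host suffix and the Nth address in 172.18.64.0/24. A minimal sketch deriving both (the node number 42 is just an example):

```shell
# Derive the IPoIB hostname and address for a graphene node,
# following the graphene-[1..144] -> 172.18.64.[1..144] scheme above.
node=42
host="graphene-${node}-ib0.nancy.grid5000.fr"
ip="172.18.64.${node}"
echo "${host} ${ip}"
```

Nodes with this card can be selected on the frontend through the OAR property listed above, e.g. with `oarsub -p "ib_rate=20"` (the exact invocation is an assumption; see the OAR documentation).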

Switch

  • Infiniband switch 4X DDR
  • Model based on Infiniscale_III
  • 1 switching card, Flextronics F-X43M204
  • 12 line cards with 12 ports 4X DDR each, Flextronics F-X43M203

Interconnection

The Infiniband network is physically isolated from the Ethernet networks; consequently, the Ethernet network emulated over Infiniband is isolated as well. There is no interconnection at either the data link (L2) or network (L3) layer.

Infiniband 56G on graphite/graoully/grimoire/grisou nodes

  • graoully-[1-16] have one 56Gb/s Infiniband card.
  • grimoire-[1-8] have one 56Gb/s Infiniband card.
  • graphite-[1-4] have one 56Gb/s Infiniband card.
  • grisou-[50-51] have one 56Gb/s Infiniband card.
  • Card model: Mellanox Technologies MT27500 Family [ConnectX-3].
  • Driver: mlx4_core
  • OAR property: ib_rate='56'
  • IP over IB addressing: graoully-[1-16]-ib0.nancy.grid5000.fr (172.18.70.[1-16])
  • IP over IB addressing: grimoire-[1-8]-ib0.nancy.grid5000.fr (172.18.71.[1-8])
  • IP over IB addressing: graphite-[1-4]-ib0.nancy.grid5000.fr (172.16.68.[9-12])
  • IP over IB addressing: grisou-[50-51]-ib0.nancy.grid5000.fr (172.16.72.[50-51])
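Note that the mapping from node number to IPoIB address is not uniform across clusters: graoully and grimoire node numbers map directly to the last address byte, while graphite-[1-4] is offset by 8 (graphite-N gets 172.16.68.(N+8)), per the ranges listed above. A quick sketch:

```shell
# graoully-N maps directly to 172.18.70.N,
# graphite-N is offset: 172.16.68.(N+8) (ranges from the list above).
node=3
echo "graoully-${node}-ib0.nancy.grid5000.fr 172.18.70.${node}"
echo "graphite-${node}-ib0.nancy.grid5000.fr 172.16.68.$((node + 8))"
```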

Switch

  • 36-port Mellanox InfiniBand SX6036
  • 36 FDR (56Gb/s) ports in a 1U switch
  • 4.032Tb/s switching capacity
  • FDR/FDR10 support for Forward Error Correction (FEC)
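The quoted switching capacity is consistent with the port count, assuming full-duplex counting (both directions of every port, as switch vendors usually quote it):

```shell
# 36 ports x 56 Gb/s x 2 directions (full duplex) = 4032 Gb/s = 4.032 Tb/s
capacity_gbps=$((36 * 56 * 2))
echo "${capacity_gbps} Gb/s"
```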

Interconnection

The Infiniband network is physically isolated from the Ethernet networks; consequently, the Ethernet network emulated over Infiniband is isolated as well. There is no interconnection at either the data link (L2) or network (L3) layer.