
'''See also:''' [[Nancy:Hardware|Hardware description for Nancy]]

= Overview of Ethernet network topology =

[[File:NancyNetwork.png|1200px]]


== Network devices models ==

* <code class="host">gw</code>: Cisco Nexus 9508
* <code class="host">sgraoullyib</code>: Infiniband
* <code class="host">sgrappe</code>: Dell S5224F-ON
* <code class="host">sgrcinq</code>: Cisco WS-C2960X-48TD-L
* <code class="host">sgrele-opf</code>: Omni-Path
* <code class="host">sgrisou1</code>: Dell S3048
* <code class="host">sgros1</code>: Dell Z9264F-ON
* <code class="host">sgros2</code>: Dell Z9264F-ON
* <code class="host">sgrvingt</code>: Dell S4048

More details (including address ranges) are available from the [[Grid5000:Network]] page.
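The same inventory can also be retrieved programmatically through the Grid'5000 Reference API. Below is a minimal sketch from inside Grid'5000; the exact collection name (<code>network_equipments</code>) and the <code>uid</code>/<code>model</code> field names are assumptions to verify against the API documentation:

 # List Nancy's network equipment (endpoint and field names to be checked
 # against the Reference API documentation)
 curl -s https://api.grid5000.fr/stable/sites/nancy/network_equipments \
   | jq -r '.items[] | "\(.uid): \(.model)"'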

= HPC Networks =

Several HPC networks are available.

== Omni-Path 100G on grele and grimani nodes ==

* <code class="host">grele-1</code> to <code class="host">grele-14</code> have one 100 Gb/s Omni-Path card.
* <code class="host">grimani-1</code> to <code class="host">grimani-6</code> have one 100 Gb/s Omni-Path card.
* Card model: Intel Omni-Path Host Fabric Adapter, 100 series, single port, PCIe x8
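To check one of these cards in practice, a node can be reserved with OAR and inspected from the node itself. A minimal sketch, assuming the Omni-Path host utilities (<code>opa-basic-tools</code>) are present in the node's environment:

 # From the Nancy frontend: reserve one grele node interactively
 oarsub -I -p "cluster='grele'"
 
 # On the node: report port state, link width and speed of the Omni-Path HFI
 opainfo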

== Omni-Path 100G on grvingt nodes ==

A second, separate Omni-Path network connects the 64 grvingt nodes and some servers. Its topology is a non-blocking (1:1) fat tree. The topology below was generated with <code>opareports -o topology</code>:

[[File:Topology-grvingt.png|400px]]

More information about using Omni-Path with MPI is available from the [[Run_MPI_On_Grid%275000|Run MPI on Grid'5000]] tutorial.
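As a minimal sketch of that workflow (the <code>./hello_mpi</code> binary is a placeholder for your own MPI program):

 # Reserve four grvingt nodes interactively
 oarsub -I -l nodes=4 -p "cluster='grvingt'"
 
 # Run on all reserved cores; with Omni-Path hardware, Open MPI normally
 # selects the PSM2 transport automatically
 mpirun -machinefile $OAR_NODEFILE ./hello_mpi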

=== Switch ===

* InfiniBand switch, 4X DDR
* Model based on [http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=16&menu_section=33 InfiniScale III]
* 1 switching card: Flextronics F-X43M204
* 12 line cards (4X, 12 ports, DDR): Flextronics F-X43M203

=== Interconnection ===

The InfiniBand network is physically isolated from the Ethernet networks. Consequently, an Ethernet network emulated over InfiniBand is isolated as well: there is no interconnection between the two, at either layer 2 or layer 3.

== Infiniband 56G on graphite/graoully/grimoire/grisou nodes ==

* <code class="host">graoully-[1-16]</code> have one 56 Gb/s InfiniBand card.
* <code class="host">grimoire-[1-8]</code> have one 56 Gb/s InfiniBand card.
* <code class="host">graphite-[1-4]</code> have one 56 Gb/s InfiniBand card.
* <code class="host">grisou-[50-51]</code> have one 56 Gb/s InfiniBand card.
* Card model: Mellanox Technologies MT27500 family ([http://www.mellanox.com/related-docs/user_manuals/ConnectX-3_VPI_Single_and_Dual_QSFP_Port_Adapter_Card_User_Manual.pdf ConnectX-3]).
* Driver: <code>mlx4_core</code>
* OAR property: <code>ib_rate='56'</code> (see the reservation example after this list)
* IP over IB addressing: <code class="host">graoully-[1-16]-ib0.nancy.grid5000.fr</code> (172.18.70.[1-16])
* IP over IB addressing: <code class="host">grimoire-[1-8]-ib0.nancy.grid5000.fr</code> (172.18.71.[1-8])
* IP over IB addressing: <code class="host">graphite-[1-4]-ib0.nancy.grid5000.fr</code> (172.16.68.[9-12])
* IP over IB addressing: <code class="host">grisou-[50-51]-ib0.nancy.grid5000.fr</code> (172.16.72.[50-51])
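The <code>ib_rate</code> OAR property above selects any of these nodes without naming a cluster. A minimal sketch (<code class="host">graoully-2</code> stands for whichever second node the reservation actually returns):

 # Reserve two nodes equipped with a 56 Gb/s InfiniBand card
 oarsub -I -l nodes=2 -p "ib_rate='56'"
 
 # From the first node, reach the second over IP-over-InfiniBand using the
 # -ib0 names listed above (only IB-connected, reserved nodes answer here)
 ping -c 3 graoully-2-ib0.nancy.grid5000.fr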

=== Switch ===

* 36-port Mellanox InfiniBand SX6036 ([http://www.mellanox.com/page/products_dyn?product_family=132 documentation])
* 36 FDR (56 Gb/s) ports in a 1U switch
* 4.032 Tb/s switching capacity
* FDR/FDR10 support for Forward Error Correction (FEC)
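Whether a node actually negotiated an FDR link with this switch can be checked from the node with standard InfiniBand diagnostics, assuming the <code>infiniband-diags</code> and <code>libibverbs</code> utilities are installed:

 # Port state and rate of the local HCA; an FDR link reports "Rate: 56"
 ibstat
 
 # Lower-level view: active link width (4X) and speed as seen by the verbs stack
 ibv_devinfo -v | grep -E 'active_(width|speed)'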

=== Interconnection ===

The InfiniBand network is physically isolated from the Ethernet networks. Consequently, an Ethernet network emulated over InfiniBand is isolated as well: there is no interconnection between the two, at either layer 2 or layer 3.
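This isolation can be observed from any reserved IB node: the IPoIB interface carries only the private IB subnet, and that subnet has no gateway. A quick check (interface name <code>ib0</code> as used in the addressing list above; exact output varies):

 # Address on the IPoIB interface: one of the 172.16/172.18 ranges above
 ip -brief addr show ib0
 
 # The IB subnet is directly attached to ib0 with no gateway entry, so no
 # traffic is routed between the Ethernet and InfiniBand sides
 ip route show dev ib0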