Nancy:Network

{{Template:Site link|Network}}
{{Portal|Network}}
{{Portal|User}}
= Overview of Ethernet network topology =

[[File:NancyNetwork.png|1200px]]

{{:Nancy:GeneratedNetwork}}

== Network device models ==

* gw: Cisco Nexus 9508
* sgraoullyib: Infiniband
* sgrappe: Dell S5224F-ON
* sgrele-opf: Omni-Path
* sgros1: Dell Z9264F-ON
* sgros2: Dell Z9264F-ON
* sgruss: Dell S5224F-ON
* sgrvingt: Dell S4048

More details (including address ranges) are available from the [[Grid5000:Network]] page.

= IP networks in use =

You have to use a public network range to run an experiment between several Grid'5000 sites.

=== Public Networks ===

* computing: '''172.28.52.0/22'''
* virtual: '''10.144.0.0/14'''

=== Local Networks ===

* admin: '''172.28.152.0/22'''
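To see which of these ranges a given node is actually using, you can list its IPv4 addresses from the node itself. This is only a minimal sketch; interface names and the exact set of configured addresses vary from cluster to cluster.

<pre>
# On a reserved node: list the IPv4 addresses of all interfaces.
# An address inside 172.28.52.0/22 belongs to the computing network,
# one inside 10.144.0.0/14 to the virtual network.
ip -4 addr show
</pre>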


= Network =

[[Image:Nancy production network.png|Experiment network|902px]]

== Addressing ==

The per-switch ranges below all fall within the computing network ('''172.28.52.0/22''') listed above.

{| class="wikitable" style="text-align:center; width:80%;"
|+ Address table
|-
! scope=col | Cluster
! scope=col | Switch
! scope=col | Number of nodes
! scope=col | Address range
|-
|griffon
|sgriffon1
|29
|172.28.54.1-29
|-
|griffon
|sgriffon2
|27
|172.28.54.30-57
|-
|griffon
|sgriffon3
|36
|172.28.54.58-92
|-
|grelon
|sgrelon1
|24
|172.28.54.101-124
|-
|grelon
|sgrelon2
|24
|172.28.54.125-148
|-
|grelon
|sgrelon3
|24
|172.28.54.149-172
|-
|grelon
|sgrelon4
|24
|172.28.54.173-196
|-
|grelon
|sgrelon5
|24
|172.28.54.197-220
|}


= Management network =

The following DEPRECATED diagram shows the experiment network and the administration network, to give a global view. Service node connections are logical.

[[Image:Reseau_admin.png|Admin network - DEPRECATED|900px]]

The following diagram shows a view of the central router, an HP ProCurve 5406zl, named sgravillon1.

* All internal links are 10 Gbit/s CX4.
* The Renater link is 10 Gbit/s optical fiber.

[[Image:routeur-nancy.png|600px]]


= Grid5000 interconnect =

== Interconnect type ==

[[Image:nancy_g5k_interconnect.png|Nancy Grid'5000 interconnect]]

== Link details ==

[[Image:nancy_g5k_linking.png|Nancy's Grid'5000 linking]]

'''Note''': All the fiber cables used are dedicated to our Grid'5000 interconnect. Where the figure above says ''trunk'', it refers to a more rigid fiber patch cord.


= Loria interconnect =

== Interconnect type ==

[[Image:nancy_loria_interconnect.png|Nancy's Loria interconnect]]

== Link details ==

[[Image:nancy_loria_linking.png|Nancy's Loria linking]]
= HPC Networks =

Several HPC networks are available.

== Omni-Path 100G on grele and grimani nodes ==

* <code class="host">grele-1</code> to <code class="host">grele-14</code> have one 100 Gbit/s Omni-Path card.
* <code class="host">grimani-1</code> to <code class="host">grimani-6</code> have one 100 Gbit/s Omni-Path card.
* Card model: Intel Omni-Path Host Fabric Adapter 100 Series, 1 port, PCIe x8
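To check the Omni-Path link from one of these nodes, the Intel OPA fabric tools can be used. This is only a sketch: it assumes the OPA tools, which provide <code>opareports</code> (used elsewhere on this page) and <code>opainfo</code>, are installed on the node.

<pre>
# On a reserved grele or grimani node: show the state and rate of the local Omni-Path port
opainfo
# Dump the fabric topology as seen from this node
opareports -o topology
</pre>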
== Omni-Path 100G on grvingt nodes ==

There is another, separate Omni-Path network connecting the 64 grvingt nodes and some servers. The topology is a non-blocking (1:1) fat tree.

Topology, generated from <code>opareports -o topology</code>:

[[File:Topology-grvingt.png|400px]]

More information about using Omni-Path with MPI is available from the [[Run_MPI_On_Grid%275000]] tutorial.
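For instance, with Open MPI the PSM2 layer can be selected explicitly so that traffic goes over Omni-Path rather than Ethernet. This is only a sketch: the exact MCA options depend on the Open MPI version installed (see the tutorial above), and <code>my_mpi_program</code> stands for your own binary.

<pre>
# Inside an OAR job on grvingt nodes: run an MPI program over Omni-Path via the PSM2 MTL
mpirun --mca pml cm --mca mtl psm2 -machinefile $OAR_NODEFILE ./my_mpi_program
</pre>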
=== Switch ===

* Infiniband Switch 4X DDR
* Model based on [http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=16&menu_section=33 Infiniscale_III]
* 1 switching card Flextronics F-X43M204
* 12 line cards, 4X DDR, 12 ports each, Flextronics F-X43M203

=== Interconnection ===

The Infiniband network is physically isolated from the Ethernet networks. Therefore, the Ethernet network emulated over Infiniband is isolated as well. There is no interconnection at either the L2 or L3 layer.
== Infiniband 56G on graphite/graoully/grimoire/grisou nodes ==

* <code class="host">graoully-[1-16]</code> have one 56 Gbit/s Infiniband card.
* <code class="host">grimoire-[1-8]</code> have one 56 Gbit/s Infiniband card.
* <code class="host">graphite-[1-4]</code> have one 56 Gbit/s Infiniband card.
* <code class="host">grisou-[50-51]</code> have one 56 Gbit/s Infiniband card.
* Card model: Mellanox Technologies MT27500 Family [ConnectX-3] ([http://www.mellanox.com/related-docs/user_manuals/ConnectX-3_VPI_Single_and_Dual_QSFP_Port_Adapter_Card_User_Manual.pdf ConnectX-3])
* Driver: <code class="dir">mlx4_core</code>
* OAR property: ib_rate='56'
* IP over IB addressing: <code class="host">graoully-[1-16]-ib0</code>.nancy.grid5000.fr (172.18.70.[1-16])
* IP over IB addressing: <code class="host">grimoire-[1-8]-ib0</code>.nancy.grid5000.fr (172.18.71.[1-8])
* IP over IB addressing: <code class="host">graphite-[1-4]-ib0</code>.nancy.grid5000.fr (172.16.68.[9-12])
* IP over IB addressing: <code class="host">grisou-[50-51]-ib0</code>.nancy.grid5000.fr (172.16.72.[50-51])
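The OAR property above can be used to reserve a node on this fabric, and the IP-over-IB names can be used directly once on the node. This is only a sketch: it uses the usual Grid'5000 <code>oarsub</code> property syntax with the ib_rate value listed above, and assumes <code>ibstat</code> from the standard InfiniBand diagnostic tools is installed.

<pre>
# From the Nancy frontend: reserve a node that has a 56 Gbit/s InfiniBand card
oarsub -I -p "ib_rate='56'"
# On the node: check the state and rate of the ConnectX-3 HCA
ibstat
# Reach another node over IP-over-InfiniBand using its -ib0 name
ping -c 3 graoully-2-ib0.nancy.grid5000.fr
</pre>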
 
=== Switch ===

* 36-port Mellanox InfiniBand SX6036
* [http://www.mellanox.com/page/products_dyn?product_family=132 Documentation]
* 36 FDR (56Gb/s) ports in a 1U switch
* 4.032Tb/s switching capacity
* FDR/FDR10 support for Forward Error Correction (FEC)

=== Interconnection ===

The Infiniband network is physically isolated from the Ethernet networks. Therefore, the Ethernet network emulated over Infiniband is isolated as well. There is no interconnection at either the L2 or L3 layer.
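A practical consequence is that the IP-over-IB addresses listed above are only reachable through the <code>ib0</code> interface of nodes equipped with an InfiniBand card. A minimal check from a <code class="host">graoully</code> node (adapt the address for the other clusters):

<pre>
# The route to an IP-over-IB address (here graoully-1's) goes through ib0,
# not through an Ethernet interface, since the two networks are not interconnected.
ip route get 172.18.70.1
</pre>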
