Nancy:Network

From Grid5000
 
{{Portal|Network}}
{{Portal|User}}
{{Maintainer|Clément Parisot}}

= Overview of Ethernet network topology =
 
[[File:NancyNetwork.png|1200px]]

{{:Nancy:GeneratedNetwork}}

= HPC Networks =

Several HPC networks are available.
 
 
 
== Omni-Path 100G on grele and grimani nodes ==
 
  
 
*<code class="host">grele-1</code> to <code class="host">grele-14</code> have one 100 Gbit/s Omni-Path card.
*<code class="host">grimani-1</code> to <code class="host">grimani-6</code> have one 100 Gbit/s Omni-Path card.
* Card Model : Intel Omni-Path Host Fabric Adapter 100 Series, 1 port, PCIe x8
 
 
 
== Omni-Path 100G on grvingt nodes ==

There's another, separate Omni-Path network connecting the 64 grvingt nodes and some servers.

Topology, generated from <code>opareports -o topology</code>:

[[File:Topology-grvingt.png|400px]]

More information about using Omni-Path with MPI is available from the [[Run_MPI_On_Grid%275000]] tutorial.
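The topology report mentioned above is produced by the Omni-Path fabric tools. Since <code>opareports</code> needs Intel OPA hardware and fabric access to run, this sketch only assembles and prints the command line; treat it as an illustration, not an official procedure:

```shell
# Sketch: the command used to generate the topology report shown above.
# Running it requires the Intel Omni-Path tools and fabric access, so
# here we only build and display the command string.
cmd="opareports -o topology"
echo "$cmd"
```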
 
  
== Infiniband 20G on griffon nodes ==

''Infiniband has been removed from these nodes''

== Infiniband 20G on graphene nodes ==

*<code class="host">graphene-1</code> to <code class="host">graphene-144</code> have one 20 Gbit/s Infiniband card.
* Card Model : Mellanox Technologies MT26418 [ConnectX IB DDR, PCIe 2.0 5GT/s] ( ConnectX )
* Driver : <code class="dir">mlx4_ib</code>
* OAR property : ib_rate=20
* IP over IB addressing : <code class="host">graphene-[1..144]-ib0</code>.nancy.grid5000.fr ( 172.18.64.[1..144] )
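The IP-over-IB addressing above is regular, so the name and address of any node's IB interface can be derived from its number. A minimal illustrative sketch in plain shell (not an official tool):

```shell
# Illustrative sketch: expand the IP-over-IB naming scheme listed
# above, mapping graphene-N to graphene-N-ib0.nancy.grid5000.fr
# and 172.18.64.N.
for n in 1 72 144; do
  printf '%s %s\n' "graphene-${n}-ib0.nancy.grid5000.fr" "172.18.64.${n}"
done
```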
  
=== Switch ===

* Infiniband Switch 4X DDR
* Model based on Infiniscale_III
* 1 commutation card Flextronics F-X43M204
* 12 line cards 4X 12 ports DDR Flextronics F-X43M203
  
=== Interconnection ===

The Infiniband network is physically isolated from the Ethernet networks, so an Ethernet network emulated over Infiniband is isolated as well: there is no interconnection at either the data link layer (L2) or the network layer (L3).
 
 
 
== Infiniband 56G on graphite/graoully/grimoire/grisou nodes  ==
 
 
*<code class="host">graoully-[1-16]</code> have one 56 Gbit/s Infiniband card.
*<code class="host">grimoire-[1-8]</code> have one 56 Gbit/s Infiniband card.
*<code class="host">graphite-[1-4]</code> have one 56 Gbit/s Infiniband card.
*<code class="host">grisou-[50-51]</code> have one 56 Gbit/s Infiniband card.
* Card Model : Mellanox Technologies MT27500 Family [ConnectX-3]
* Driver : <code class="dir">mlx4_core</code>
* OAR property : ib_rate='56'
* IP over IB addressing : <code class="host">graoully-[1-16]-ib0</code>.nancy.grid5000.fr ( 172.18.70.[1-16] )
* IP over IB addressing : <code class="host">grimoire-[1-8]-ib0</code>.nancy.grid5000.fr ( 172.18.71.[1-8] )
* IP over IB addressing : <code class="host">graphite-[1-4]-ib0</code>.nancy.grid5000.fr ( 172.16.68.[9-12] )
* IP over IB addressing : <code class="host">grisou-[50-51]-ib0</code>.nancy.grid5000.fr ( 172.16.72.[50-51] )
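Nodes carrying this OAR property can be targeted with a property filter at reservation time. The exact <code>oarsub</code> flags below (<code>-p</code> for a property expression, <code>-I</code> for an interactive job) follow common OAR usage but should be checked against your frontend's documentation; the sketch only assembles and prints the command string:

```shell
# Sketch: compose an OAR submission filtered on the ib_rate property
# listed above. The flags are an assumption based on common oarsub
# usage; verify them on your site before relying on this.
prop="ib_rate='56'"
cmd="oarsub -p \"$prop\" -I"
echo "$cmd"
```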
  
=== Switch ===

* 36-port Mellanox InfiniBand SX6036
* Documentation
* 36 FDR (56Gb/s) ports in a 1U switch
* 4.032Tb/s switching capacity
* FDR/FDR10 support for Forward Error Correction (FEC)
  
=== Interconnection ===

The Infiniband network is physically isolated from the Ethernet networks, so an Ethernet network emulated over Infiniband is isolated as well: there is no interconnection at either the data link layer (L2) or the network layer (L3).
 
 
 
 

Revision as of 23:18, 5 July 2018
