Nancy:Network

From Grid5000
[[File:NancyNetwork.png|900px|Overview of Ethernet network topology]]


= IP networks in use =


To run an experiment spanning several Grid5000 sites, you have to use a public network range.
 
== Public Networks ==
 
* computing : '''172.16.64.0/20'''
* ib/mx : '''172.18.64.0/20'''
* virtual : '''10.144.0.0/14'''
For Infiniband (ib), see [[Nancy:Network#High_Performance_Networks|High Performance Networks]].
 
== Local Networks ==
 
* admin : '''172.17.64.0/20'''
* nat : '''192.168.69.0/30'''
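
The ranges above can be checked programmatically. The sketch below uses Python's standard <code>ipaddress</code> module; <code>NANCY_NETWORKS</code> and <code>classify</code> are illustrative names for this example, not part of any Grid5000 tooling.

```python
import ipaddress

# Nancy IP ranges, copied from the public and local network lists above.
NANCY_NETWORKS = {
    "computing": ipaddress.ip_network("172.16.64.0/20"),
    "ib/mx": ipaddress.ip_network("172.18.64.0/20"),
    "virtual": ipaddress.ip_network("10.144.0.0/14"),
    "admin": ipaddress.ip_network("172.17.64.0/20"),
    "nat": ipaddress.ip_network("192.168.69.0/30"),
}

def classify(addr):
    """Return the name of the Nancy network containing addr, or None."""
    ip = ipaddress.ip_address(addr)
    for name, net in NANCY_NETWORKS.items():
        if ip in net:
            return name
    return None

print(classify("172.18.64.10"))  # ib/mx
```

For example, an address from the IP-over-IB range resolves to <code>ib/mx</code>, while any address outside the five ranges returns <code>None</code>.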
 
= Network =
 
== Production Network ==
=== Room B056 ===
[[Image:NetworkNancy2.png|center|Production network]]
=== Room C105A - Talc (retired) ===
[[Image:NetworkNancyTalc.png|center|Production network Talc]]
 
== Physical link details ==
The following diagram shows a view of the central router, an HP ProCurve 5406zl, named sgravillon1.
* All internal links are 10 Gbps CX4.
 
[[Image:Sgravillon1-graphene.png|center|900px]]
 
 
<table class="program">
<tr>
    <th>A1</th>
    <th>A3</th>
    <th>A5</th>
    <th>A7</th>
    <th>A9</th>
    <th>A11</th>
    <th>A13</th>
    <th>A15</th>
    <th>A17</th>
    <th>A19</th>
    <th>A21</th>
    <th>A23</th>
</tr>
<tr>
    <td>sgrapheneib</td>
    <td>TALC-adm</td>
    <td> - </td>
    <td>fgriffon2-ipmi</td>
    <td>grog-eth1</td>
    <td bgcolor="yellow">fgriffon1-eth0</td>
    <td>sgraphene1-ipmi</td>
    <td>sgraphene3-ipmi</td>
    <td> - </td>
    <td> - </td>
    <td> - </td>
    <td bgcolor="yellow">fgriffon1-eth1</td>
</tr>
<tr>
    <th>A2</th>
    <th>A4</th>
    <th>A6</th>
    <th>A8</th>
    <th>A10</th>
    <th>A12</th>
    <th>A14</th>
    <th>A16</th>
    <th>A18</th>
    <th>A20</th>
    <th>A22</th>
    <th>A24</th>
</tr>
<tr>
    <td></td>
    <td></td>
    <td></td>
    <td>grog-eth0</td>
    <td>fgriffon1-ipmi</td>
    <td> - </td>
    <td>sgraphene2-ipmi</td>
    <td>sgraphene4-ipmi</td>
    <td></td>
    <td>sgriffon1-ipmi</td>
    <td> - </td>
    <td> - </td>
</tr>
</table>
 
=== Link colors ===
==== Network cables ====
* <span style="color: red;">##</span> Red : Production Network
* <span style="color: green;">##</span> Green : Management Network
* <span style="color: blue;">##</span> Blue and White : Management interconnect
* <span style="color: black;">##</span> Black : Server admin Network
==== Table colors ====
* <span style="color: yellow;">##</span> Yellow : LACP etherchannel link for fgriffon1 (trk3) (2×1 Gbps)
 
== HPC Networks ==
 
=== Omni-Path 100G on grele and grimani nodes  ===


*<code class="host">grele-1</code> to <code class="host">grele-14</code> have one 100 Gbps Omni-Path card.
*<code class="host">grimani-1</code> to <code class="host">grimani-6</code> have one 100 Gbps Omni-Path card.
* Card Model : Intel Omni-Path Host Fabric Adapter 100 Series, 1 port, PCIe x8


=== Infiniband 20G on griffon nodes ===
''Infiniband has been removed from these nodes''


=== Infiniband 20G on graphene nodes ===

*<code class="host">graphene-1</code> to <code class="host">graphene-144</code> have one 20 Gbps Infiniband card.
* Card Model : Mellanox Technologies MT26418 [ConnectX IB DDR, PCIe 2.0 5GT/s] ( ConnectX ).
* Driver : mlx4_ib
* OAR property : ib_rate=20
* IP over IB addressing : <code class="host">graphene-[1..144]-ib0</code>.nancy.grid5000.fr ( 172.18.64.[1..144] )
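
This addressing rule is a direct node-number to last-octet mapping, so the full IB host list can be generated mechanically. The sketch below is illustrative only; <code>graphene_ib_hosts</code> is a hypothetical helper, not an official Grid5000 tool.

```python
# Sketch: enumerate graphene's IP-over-IB hostnames, assuming the direct
# node-number -> last-octet mapping described above (graphene-n -> 172.18.64.n).
def graphene_ib_hosts():
    for n in range(1, 145):  # graphene-1 .. graphene-144
        yield "graphene-%d-ib0.nancy.grid5000.fr" % n, "172.18.64.%d" % n

hosts = dict(graphene_ib_hosts())
print(hosts["graphene-42-ib0.nancy.grid5000.fr"])  # 172.18.64.42
```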


==== Switch ====

* Infiniband Switch 4X DDR
* Model based on Infiniscale_III
* 1 switching card Flextronics F-X43M204
* 12 line cards 4X 12 ports DDR Flextronics F-X43M203


==== Interconnection ====

The Infiniband network is physically isolated from the Ethernet networks. Therefore, the Ethernet network emulated over Infiniband is isolated as well: there is no interconnection, either at the data link layer or at the network layer.


=== Infiniband 56G on graphite/graoully/grimoire/grisou nodes ===

*<code class="host">graoully-[1-16]</code> have one 56 Gbps Infiniband card.
*<code class="host">grimoire-[1-8]</code> have one 56 Gbps Infiniband card.
*<code class="host">graphite-[1-4]</code> have one 56 Gbps Infiniband card.
*<code class="host">grisou-[50-51]</code> have one 56 Gbps Infiniband card.
* Card Model : Mellanox Technologies MT27500 Family [ConnectX-3] ( ConnectX-3 ).
* Driver : mlx4_core
* OAR property : ib_rate='56'
* IP over IB addressing : <code class="host">graoully-[1-16]-ib0</code>.nancy.grid5000.fr ( 172.18.70.[1-16] )
* IP over IB addressing : <code class="host">grimoire-[1-8]-ib0</code>.nancy.grid5000.fr ( 172.18.71.[1-8] )
* IP over IB addressing : <code class="host">graphite-[1-4]-ib0</code>.nancy.grid5000.fr ( 172.16.68.[9-12] )
* IP over IB addressing : <code class="host">grisou-[50-51]-ib0</code>.nancy.grid5000.fr ( 172.16.72.[50-51] )
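
Unlike graphene, these clusters do not all map the node number directly to the last octet: graphite-[1-4] lands on 172.16.68.[9-12], an offset of +8. The sketch below encodes the scheme; <code>IB_ADDRESSING</code> and <code>ib_ip</code> are illustrative names for this example, not Grid5000 APIs.

```python
# Sketch of the IP-over-IB addressing listed above. Each cluster maps node
# number n to prefix + (n + offset); only graphite has a non-zero offset.
IB_ADDRESSING = {
    # cluster: (address prefix, node-number offset, valid node numbers)
    "graoully": ("172.18.70.", 0, range(1, 17)),
    "grimoire": ("172.18.71.", 0, range(1, 9)),
    "graphite": ("172.16.68.", 8, range(1, 5)),   # graphite-[1-4] -> .9-.12
    "grisou":   ("172.16.72.", 0, range(50, 52)),
}

def ib_ip(cluster, n):
    """Return the IP-over-IB address of <cluster>-<n>-ib0."""
    prefix, offset, valid = IB_ADDRESSING[cluster]
    if n not in valid:
        raise ValueError("no IB interface for %s-%d" % (cluster, n))
    return prefix + str(n + offset)

print(ib_ip("graphite", 1))  # 172.16.68.9
```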


==== Switch ====

* 36-port Mellanox InfiniBand SX6036
* 36 FDR (56Gb/s) ports in a 1U switch
* 4.032Tb/s switching capacity
* FDR/FDR10 support for Forward Error Correction (FEC)


==== Interconnection ====

The Infiniband network is physically isolated from the Ethernet networks. Therefore, the Ethernet network emulated over Infiniband is isolated as well: there is no interconnection, either at the data link layer or at the network layer.

Revision as of 08:39, 20 May 2018
