Nancy:Network: Difference between revisions


== HPC Networks ==


=== Ethernet 10G on grele nodes  ===
==== Nodes ====
 
*<code class="host">grele-1</code> to <code class="host">grele-14</code> have one 10GB Ethernet card.
 
* Card Model :  Intel X520 DP 10Gbit/s Direct Attach/SFP+ + I350 DP 1Gbit/s.
* Driver : <code class="dir">ixgbe</code>
 
 
=== Ethernet 10G on graphite nodes  ===
 
==== Nodes ====
 
*<code class="host">graphite-1</code> to <code class="host">graphite-4</code> have one 10GB Ethernet card.
 
* Card Model :  Intel X520 DP 10Gbit/s Direct Attach/SFP+ + I350 DP 1Gbit/s.
* Driver : <code class="dir">ixgbe</code>
 
=== Ethernet 10G on grimani nodes  ===
 
==== Nodes ====
 
*<code class="host">grimani-1</code> to <code class="host">grimani-6</code> have one 10GB Ethernet card.
 
* Card Model :  Intel X520, Dual Port, 10Gb, 1GbE, DA/SFP+, I350 DP
* Driver : <code class="dir">ixgbe</code>


=== Ethernet 10G on griffon nodes ===
==== Nodes ====
* The Myrinet interface has been disconnected from these nodes
*<code class="host">griffon-11</code> and <code class="host">griffon-14</code> have one 10Gbps Myricom card
* Card Model : Myri-10G ( [http://www.myri.com/Myri-10G/10gbe_solutions.html] ) 10G-PCIE-8B-C NIC
* Driver : <code class="dir">myri10ge</code>
* ''The Myrinet interface is no longer used as a 10G Ethernet adapter. See {{Bug|6490}} for more information.''
=== Ethernet 10G on grisou nodes ===
==== Nodes ====
*<code class="host">grisou-1</code> to <code class="host">grisou-51</code> have 4 10GB SFP+ interfaces
* Card Model :
** 82599ES 10-Gigabit SFI/SFP+ Network Connection
** Ethernet 10G 2P X520 Adapter
* Driver : <code class="dir">ixgbe</code>
* OAR property : eth_count=4 + eth_count=5 ( [https://helpdesk.grid5000.fr/oar/Nancy/monika.cgi?props=ethnb%3D4&Action=Display+nodes+for+these+properties&.cgifields=props Monika] ).
=== Ethernet 1G on grisou nodes ===
==== Nodes ====
*<code class="host">grisou-1</code> to <code class="host">grisou-48</code> have 1 1GB SFP+ interfaces
* Card Model :
** 82599ES 1-Gigabit SFI/SFP+ Network Connection
** I350 Gigabit Network Connection
* Driver : <code class="dir">igb</code>
* OAR property : eth_count=5 ( [https://helpdesk.grid5000.fr/oar/Nancy/monika.cgi?props=ethnb%3D4&Action=Display+nodes+for+these+properties&.cgifields=props Monika] ).
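
The driver names given in the sections above (<code class="dir">ixgbe</code>, <code class="dir">myri10ge</code>, <code class="dir">igb</code>) can be double-checked on a deployed node. The following Python sketch is one way to do that: it walks the standard Linux sysfs tree and prints the kernel driver bound to each network interface (the interface_drivers helper is just an example, not Grid5000 tooling).

<syntaxhighlight lang="python">
import os

SYS_NET = "/sys/class/net"

def interface_drivers():
    """Map each network interface to the kernel driver bound to it, per sysfs."""
    drivers = {}
    for iface in sorted(os.listdir(SYS_NET)):
        link = os.path.join(SYS_NET, iface, "device", "driver")
        if os.path.islink(link):
            # the symlink points at .../drivers/<name>, e.g. .../drivers/ixgbe
            drivers[iface] = os.path.basename(os.readlink(link))
        else:
            drivers[iface] = None  # lo, bridges and other virtual interfaces have no device
    return drivers

if __name__ == "__main__":
    for iface, driver in interface_drivers().items():
        print(f"{iface}: {driver or 'no driver (virtual interface)'}")
</syntaxhighlight>

On a grisou node this should report <code class="dir">ixgbe</code> for the 10G ports and <code class="dir">igb</code> for the 1G ports, matching the driver lines above.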



Revision as of 08:36, 20 May 2018

Overview of network topology

NancyNetwork.png

IP networks in use

You have to use a public network range to run an experiment involving several Grid5000 sites (a sketch for checking which range an address belongs to follows the lists below).

Public Networks

  • computing : 172.16.64.0/20
  • ib/mx : 172.18.64.0/20
  • virtual : 10.144.0.0/14

For Infiniband (ib), see HPC Networks.

Local Networks

  • admin : 172.17.64.0/20
  • nat : 192.168.69.0/30
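
As an illustration, the following Python sketch uses the standard ipaddress module to check which of the Nancy ranges listed above a given address belongs to; the classify helper is just an example, not a Grid5000 tool.

<syntaxhighlight lang="python">
import ipaddress

# Nancy ranges exactly as documented in the lists above
NANCY_NETWORKS = {
    "computing": ipaddress.ip_network("172.16.64.0/20"),
    "ib/mx":     ipaddress.ip_network("172.18.64.0/20"),
    "virtual":   ipaddress.ip_network("10.144.0.0/14"),
    "admin":     ipaddress.ip_network("172.17.64.0/20"),
    "nat":       ipaddress.ip_network("192.168.69.0/30"),
}

def classify(address: str) -> str:
    """Return the name of the Nancy range containing the address, or 'unknown'."""
    ip = ipaddress.ip_address(address)
    for name, network in NANCY_NETWORKS.items():
        if ip in network:
            return name
    return "unknown"

print(classify("172.18.64.1"))  # ib/mx (graphene-1's IP-over-IB address, see below)
print(classify("10.144.3.7"))   # virtual
print(classify("192.168.1.1"))  # unknown (not a Nancy range)
</syntaxhighlight>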

Network

Production Network

Room B056

Production network

Room C105A - Talc (retired)

Production network Talc

Physical link details

The following diagram shows a view of the central router, an HP ProCurve 5406zl, named sgravillon1.

  • All internal links are 10Gbps CX4.
Sgravillon1-graphene.png


Port assignments on sgravillon1 (ports A1-A24):

  • Odd ports : A1: sgrapheneib, A3: TALC-adm, A5: -, A7: fgriffon2-ipmi, A9: grog-eth1, A11: fgriffon1-eth0, A13: sgraphene1-ipmi, A15: sgraphene3-ipmi, A17: -, A19: -, A21: -, A23: fgriffon1-eth1
  • Even ports : A2: grog-eth0, A4: fgriffon1-ipmi, A6: -, A8: sgraphene2-ipmi, A10: sgraphene4-ipmi, A12: sgriffon1-ipmi, A14: -, A16: -, A18 to A24: unassigned

Link colors

Network cables

  • ## Red : Production Network
  • ## Green : Management Network
  • ## Blue and White : Management interconnection
  • ## Black : Server admin Network

Table color

  • ## Yellow : LACP etherchannel link for fgriffon1 (trk3) (2x1Gbps)

HPC Networks

Omni-Path 100G on grele and grimani nodes

  • grele-1 to grele-14 have one 100Gbps Omni-Path card.
  • grimani-1 to grimani-6 have one 100Gbps Omni-Path card.
  • Card Model : Intel Omni-Path Host Fabric Adapter 100 Series, 1 Port, PCIe x8

Infiniband 20G on griffon nodes

Infiniband has been removed from these nodes

Infiniband 20G on graphene nodes

  • graphene-1 to graphene-144 have one 20Gbps Infiniband card.
  • Card Model : Mellanox Technologies MT26418 [ConnectX IB DDR, PCIe 2.0 5GT/s] ( ConnectX ).
  • Driver : mlx4_ib
  • OAR property : ib_rate=20
  • IP over IB addressing : graphene-[1..144]-ib0.nancy.grid5000.fr ( 172.18.64.[1..144] ); see the sketch below.
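
The addressing rule above maps the node number directly to both the -ib0 host name and the last octet of the IP-over-IB address. The following Python sketch spells out that documented pattern; the graphene_ib0 helper is just an example, not a Grid5000 tool.

<syntaxhighlight lang="python">
import ipaddress

# "ib/mx" range from the "IP networks in use" section of this page
IB_NETWORK = ipaddress.ip_network("172.18.64.0/20")

def graphene_ib0(n: int):
    """Return (fqdn, ip) of graphene-n's IP-over-IB interface, per the pattern above."""
    if not 1 <= n <= 144:
        raise ValueError("graphene nodes are numbered 1..144")
    fqdn = f"graphene-{n}-ib0.nancy.grid5000.fr"
    ip = ipaddress.ip_address(f"172.18.64.{n}")
    assert ip in IB_NETWORK  # sanity check: the address sits in the ib/mx range
    return fqdn, str(ip)

print(graphene_ib0(1))    # ('graphene-1-ib0.nancy.grid5000.fr', '172.18.64.1')
print(graphene_ib0(144))  # ('graphene-144-ib0.nancy.grid5000.fr', '172.18.64.144')
</syntaxhighlight>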

Switch

  • Infiniband Switch 4X DDR
  • Model based on Infiniscale_III
  • 1 switching card Flextronics F-X43M204
  • 12 line cards (12 ports 4X DDR each) Flextronics F-X43M203

Interconnection

The Infiniband network is physically isolated from the Ethernet networks. Therefore, the Ethernet network emulated over Infiniband is isolated as well. There is no interconnection, neither at the data link layer nor at the network layer.

Infiniband 56G on graphite/graoully/grimoire/grisou nodes

  • graoully-[1-16] have one 56Gbps Infiniband card.
  • grimoire-[1-8] have one 56Gbps Infiniband card.
  • graphite-[1-4] have one 56Gbps Infiniband card.
  • grisou-[50-51] have one 56Gbps Infiniband card.
  • Card Model : Mellanox Technologies MT27500 Family [ConnectX-3] ( ConnectX-3 ).
  • Driver : mlx4_core
  • OAR property : ib_rate='56' (see the reservation sketch after this list)
  • IP over IB addressing : graoully-[1-16]-ib0.nancy.grid5000.fr ( 172.18.70.[1-16] )
  • IP over IB addressing : grimoire-[1-8]-ib0.nancy.grid5000.fr ( 172.18.71.[1-8] )
  • IP over IB addressing : graphite-[1-4]-ib0.nancy.grid5000.fr ( 172.16.68.[9-12] )
  • IP over IB addressing : grisou-[50-51]-ib0.nancy.grid5000.fr ( 172.16.72.[50-51] )
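
To reserve nodes carrying these cards, the ib_rate property listed above can be passed to OAR. The sketch below is an illustration only: it builds an oarsub command line using the standard OAR options -p (property filter) and -l (resource request); check the current OAR and Grid5000 documentation for the exact syntax before relying on it.

<syntaxhighlight lang="python">
import subprocess

# Illustrative sketch: ask OAR for two nodes whose ib_rate property is '56',
# i.e. nodes carrying the 56G Infiniband cards inventoried above.
cmd = [
    "oarsub",
    "-p", "ib_rate='56'",              # OAR property filter, as written on this page
    "-l", "nodes=2,walltime=1:00:00",  # two nodes for one hour
    "sleep 3600",                      # placeholder job command
]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout or result.stderr)
</syntaxhighlight>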

Switch

  • 36-port Mellanox InfiniBand SX6036
  • Documentation
  • 36 FDR (56Gb/s) ports in a 1U switch
  • 4.032Tb/s switching capacity
  • FDR/FDR10 support for Forward Error Correction (FEC)

Interconnection

The Infiniband network is physically isolated from the Ethernet networks. Therefore, the Ethernet network emulated over Infiniband is isolated as well. There is no interconnection, neither at the data link layer nor at the network layer.