Nancy:Network

Maintainer: Clément Parisot

Overview of Ethernet network topology

[Figure: NancyNetwork.png, Ethernet network topology of the Nancy site]

IP networks in use

To run an experiment spanning several Grid5000 sites, you have to use addresses from a public network range (a short sketch after the lists below shows how to check which range an address belongs to).

Public Networks

  • computing : 172.16.64.0/20
  • ib/mx : 172.18.64.0/20
  • virtual : 10.144.0.0/14

For Infiniband (ib), see High Performance Networks.

Local Networks

  • admin : 172.17.64.0/20
  • nat : 192.168.69.0/30
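
As a quick illustration of these ranges, the following Python sketch (not an official Grid5000 tool; the network list simply restates the ranges above) reports which Nancy range a given address belongs to:

  import ipaddress

  # Nancy address ranges as listed above (illustrative only)
  NANCY_RANGES = {
      "computing (public)": ipaddress.ip_network("172.16.64.0/20"),
      "ib/mx (public)": ipaddress.ip_network("172.18.64.0/20"),
      "virtual (public)": ipaddress.ip_network("10.144.0.0/14"),
      "admin (local)": ipaddress.ip_network("172.17.64.0/20"),
      "nat (local)": ipaddress.ip_network("192.168.69.0/30"),
  }

  def classify(addr):
      """Return the name of the Nancy range containing addr, or 'unknown'."""
      ip = ipaddress.ip_address(addr)
      for name, net in NANCY_RANGES.items():
          if ip in net:
              return name
      return "unknown"

  # Example: graphene-1's IP-over-IB address (see the Infiniband sections below)
  print(classify("172.18.64.1"))  # -> ib/mx (public)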

Network

Production Network

Room B056

[Figure: production network diagram, room B056]

Room C105A - Talc (retired)

[Figure: production network diagram, Talc]

Physical link details

The following diagram shows a view of the central router, an HP ProCurve 5406zl, named sgravillon1.

  • All internal links are 10 Gbps CX4.

[Figure: Sgravillon1-graphene.png, sgravillon1 central router port layout]


sgravillon1 port assignments:

  Port  Device            Port  Device
  A1    sgrapheneib       A2    grog-eth0
  A3    TALC-adm          A4    fgriffon1-ipmi
  A5    -                 A6    -
  A7    fgriffon2-ipmi    A8    sgraphene2-ipmi
  A9    grog-eth1         A10   sgraphene4-ipmi
  A11   fgriffon1-eth0    A12   sgriffon1-ipmi
  A13   sgraphene1-ipmi   A14   -
  A15   sgraphene3-ipmi   A16   -
  A17   -                 A18
  A19   -                 A20
  A21   -                 A22
  A23   fgriffon1-eth1    A24

Link colors

Network cables

  • Red: production network
  • Green: management network
  • Blue and white: management interconnect
  • Black: server admin network

Table colors

  • Yellow: LACP EtherChannel link for fgriffon1 (trk3), 2 x 1 Gbps

HPC Networks

Omni-Path 100G on grele and grimani nodes

  • grele-1 to grele-14 have one 100 Gbps Omni-Path card.
  • grimani-1 to grimani-6 have one 100 Gbps Omni-Path card.
  • Card model: Intel Omni-Path Host Fabric Adapter 100 Series, 1 port, PCIe x8

Infiniband 20G on griffon nodes

Infiniband has been removed from these nodes

Infiniband 20G on graphene nodes

  • graphene-1 to graphene-144 have one 20 Gbps Infiniband card.
  • Card model: Mellanox Technologies MT26418 [ConnectX IB DDR, PCIe 2.0 5GT/s] (ConnectX).
  • Driver: mlx4_ib
  • OAR property: ib_rate=20
  • IP over IB addressing: graphene-[1..144]-ib0.nancy.grid5000.fr (172.18.64.[1..144])
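
The addressing above follows a simple pattern: node index N maps directly to the last octet within 172.18.64.0/20. A minimal, purely illustrative Python sketch of that mapping (the helper name is made up for this example):

  def graphene_ib0(n):
      """IP-over-IB hostname and address of graphene-<n>, per the plan above."""
      if not 1 <= n <= 144:
          raise ValueError("graphene nodes are numbered 1..144")
      return ("graphene-%d-ib0.nancy.grid5000.fr" % n, "172.18.64.%d" % n)

  print(graphene_ib0(42))  # ('graphene-42-ib0.nancy.grid5000.fr', '172.18.64.42')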

Switch

  • Infiniband switch, 4X DDR
  • Model based on InfiniScale III
  • 1 switching card: Flextronics F-X43M204
  • 12 line cards (4X, 12 ports, DDR): Flextronics F-X43M203

Interconnection

The Infiniband network is physically isolated from the Ethernet networks. Therefore, the Ethernet network emulated over Infiniband is isolated as well: there is no interconnection at either the data link layer or the network layer.

Infiniband 56G on graphite/graoully/grimoire/grisou nodes

  • graoully-[1-16] have one 56 Gbps Infiniband card.
  • grimoire-[1-8] have one 56 Gbps Infiniband card.
  • graphite-[1-4] have one 56 Gbps Infiniband card.
  • grisou-[50-51] have one 56 Gbps Infiniband card.
  • Card model: Mellanox Technologies MT27500 Family [ConnectX-3] (ConnectX-3).
  • Driver: mlx4_core
  • OAR property: ib_rate='56'
  • IP over IB addressing: graoully-[1-16]-ib0.nancy.grid5000.fr (172.18.70.[1-16])
  • IP over IB addressing: grimoire-[1-8]-ib0.nancy.grid5000.fr (172.18.71.[1-8])
  • IP over IB addressing: graphite-[1-4]-ib0.nancy.grid5000.fr (172.16.68.[9-12])
  • IP over IB addressing: grisou-[50-51]-ib0.nancy.grid5000.fr (172.16.72.[50-51])
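
Here the node index does not always equal the last octet: assuming the bracketed ranges map in order, graphite-1 is 172.16.68.9 (an offset of 8), while the other clusters map directly. A small illustrative Python sketch of this plan (the dictionary and helper are made up for this example):

  # IP-over-IB plan for the 56G clusters, as listed above (illustrative only).
  # Each entry: (first node, last node, address prefix, last-octet offset).
  IB56_PLAN = {
      "graoully": (1, 16, "172.18.70.", 0),   # graoully-1 -> 172.18.70.1
      "grimoire": (1, 8, "172.18.71.", 0),    # grimoire-1 -> 172.18.71.1
      "graphite": (1, 4, "172.16.68.", 8),    # graphite-1 -> 172.16.68.9
      "grisou": (50, 51, "172.16.72.", 0),    # grisou-50  -> 172.16.72.50
  }

  def ib0_address(cluster, n):
      """ib0 IP address of <cluster>-<n>, per the plan above."""
      lo, hi, prefix, offset = IB56_PLAN[cluster]
      if not lo <= n <= hi:
          raise ValueError("%s nodes are numbered %d..%d" % (cluster, lo, hi))
      return prefix + str(n + offset)

  print(ib0_address("graphite", 1))  # 172.16.68.9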

Switch

  • 36-port Mellanox InfiniBand SX6036
  • Documentation
  • 36 FDR (56 Gb/s) ports in a 1U switch
  • 4.032 Tb/s switching capacity
  • FDR/FDR10 support for Forward Error Correction (FEC)

Interconnection

The Infiniband network is physically isolated from the Ethernet networks. Therefore, the Ethernet network emulated over Infiniband is isolated as well: there is no interconnection at either the data link layer or the network layer.