Special Features

From Grid5000


This page lists features and special hardware or software known to be available in Grid'5000 or on the network of Grid'5000 sites, but that is not maintained by the support staff.

Energy monitoring

See Power Measurement for more detailed information.

Network probes

High-performance network probes were explored on a few sites. A machine receives a mirror of the outgoing 10G link in Rennes and Lille, and Lyon has dedicated hardware. Rennes and Lyon also have a machine with two 10G network cards for traffic shaping. The best contacts for this are Matthieu Imbert for Lyon and Pascal Morillon for Rennes.

Storage

iRODS is deployed in Grenoble. You can request access from Bruno Bzeznik or look at the documentation available on the dedicated page.

Nodes with multiple cabled Ethernet interfaces

Grid'5000 provides some clusters whose nodes have more than one cabled Ethernet interface.

List of the clusters

Griffon (Nancy)

Griffon has nodes with a second and a third cabled network interface: nodes 11 to 14 (griffon-[11-14].nancy.grid5000.fr)

These interfaces show up as eth1 and eth2 (griffon-X-eth1.nancy.grid5000.fr, griffon-X-eth2.nancy.grid5000.fr)

Graphite (Nancy)

Graphite has nodes with a second and a third cabled network interface (graphite-[1-4].nancy.grid5000.fr)

These interfaces show up as eth2 and eth3 (graphite-X-eth2.nancy.grid5000.fr, graphite-X-eth3.nancy.grid5000.fr)

Please also note that the Intel Xeon Phi (MIC) installed in those nodes shows up as a mic0 Ethernet interface.

Paranoia (Rennes)

Paranoia has a second 1 Gbps network interface, on eth2. See the Rennes:Hardware and Rennes:Network#Network pages.

Paravance (Rennes)

Paravance has a second 10 Gbps network interface, on eth1. See the Rennes:Hardware and Rennes:Network#Network pages.

Parasilo (Rennes)

Parasilo has a second 10 Gbps network interface, on eth1. See the Rennes:Hardware and Rennes:Network#Network pages.

Example of usage of the extra network interfaces

We show here how to reserve and configure multiple Ethernet network interfaces.

First we reserve a deploy job:

Terminal.png frontend:
oarsub -I -t deploy -p "eth_count > 1 and cluster = 'cluster-name'" -l nodes=nb_node,walltime=02:00:00

Then we deploy the Debian 9 (stretch) environment:

Terminal.png frontend:
kadeploy3 -f $OAR_NODEFILE -k -e debian9-x64-nfs


See the cluster sections above to know which Ethernet interfaces (ethX) can be used. For example, on griffon, eth1 and eth2 are cabled, so use I=1 and J=2.

Get node name with interfaces:

Terminal.png frontend:
uniq $OAR_FILE_NODES | sed -e 's/\([^\.]*\)\(.*\)/\1-ethI\2/' > nodes_second_int
Terminal.png frontend:
uniq $OAR_FILE_NODES | sed -e 's/\([^\.]*\)\(.*\)/\1-ethJ\2/' > nodes_third_int
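To check what the sed expression does, here is a quick sketch using a griffon node name (I=1, as in the example above); it can be run anywhere, since it only rewrites a hostname string by inserting "-eth1" before the first dot of the FQDN:

```shell
# Rewrite a node FQDN to its eth1 alias: group 1 captures everything up
# to the first dot ("griffon-11"), group 2 captures the rest of the FQDN.
echo "griffon-11.nancy.grid5000.fr" \
  | sed -e 's/\([^\.]*\)\(.*\)/\1-eth1\2/'
# griffon-11-eth1.nancy.grid5000.fr
```

The same pattern with eth2 produces the nodes_third_int entries.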

Reserve two VLANs:

Terminal.png frontend:
oarsub -I -t deploy -l {"type='kavlan'"}/vlan=2,walltime=03:00:00

Show the VLAN numbers:

Terminal.png frontend:
kavlan -V

Put the interfaces in the two different VLANs:

Terminal.png frontend:
kavlan -i vlan1 -s -f nodes_second_int
Terminal.png frontend:
kavlan -i vlan2 -s -f nodes_third_int

Get an IP on the second and third interfaces:

Terminal.png frontend:
taktuk -d -1 -l root -f <(uniq $OAR_NODEFILE) broadcast exec [ 'dhclient ethI' ]
Terminal.png frontend:
taktuk -d -1 -l root -f <(uniq $OAR_NODEFILE) broadcast exec [ 'dhclient ethJ' ]

At this point your node should have 4 IPs:

Terminal.png node:
ip a
root@griffon-11:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
   link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
   inet 127.0.0.1/8 scope host lo
   inet6 ::1/128 scope host 
      valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
   link/ether 00:e0:81:b2:c2:4c brd ff:ff:ff:ff:ff:ff
   inet 172.16.65.11/20 brd 172.16.79.255 scope global eth0
   inet6 fe80::2e0:81ff:feb2:c24c/64 scope link 
      valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
   link/ether 00:e0:81:b2:c2:4d brd ff:ff:ff:ff:ff:ff
   inet 10.16.7.11/18 brd 10.16.63.255 scope global eth1
   inet6 fe80::2e0:81ff:feb2:c24d/64 scope link 
      valid_lft forever preferred_lft forever
4: ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc pfifo_fast state UP qlen 256
   link/infiniband 80:00:00:48:fe:80:00:00:00:00:00:00:00:02:c9:03:00:02:80:d9 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
   inet 172.18.65.11/20 brd 172.18.79.255 scope global ib0
   inet6 fe80::202:c903:2:80d9/64 scope link 
      valid_lft forever preferred_lft forever
5: ib1: <BROADCAST,MULTICAST> mtu 65520 qdisc noop state DOWN qlen 256
   link/infiniband 80:00:00:49:fe:80:00:00:00:00:00:00:00:02:c9:03:00:02:80:da brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
6: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
   link/ether 00:60:dd:46:84:b7 brd ff:ff:ff:ff:ff:ff
   inet 10.16.72.11/18 brd 10.16.127.255 scope global eth2
   inet6 fe80::260:ddff:fe46:84b7/64 scope link 
      valid_lft forever preferred_lft forever
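If you need the assigned addresses in a script, the inet lines of the ip output can be parsed with awk. A small sketch, using the eth1 excerpt from the sample output above (on a node you would pipe `ip -4 addr show eth1` instead of the here-variable):

```shell
# Extract the IPv4 address from an `ip a` inet line: field 2 holds
# "address/prefix", so we strip the "/prefix" suffix before printing.
ip_output='3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
   link/ether 00:e0:81:b2:c2:4d brd ff:ff:ff:ff:ff:ff
   inet 10.16.7.11/18 brd 10.16.63.255 scope global eth1'
echo "$ip_output" | awk '/inet /{sub(/\/.*/, "", $2); print $2}'
# 10.16.7.11
```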
Note.png Note

Please note that ib0 and ib1 here (and on any cluster with an InfiniBand network) are the IP-over-InfiniBand interfaces, which cannot be configured like regular Ethernet interfaces.

For more information, please look at the KaVLAN pages.