Revision as of 11:36, 9 November 2016 by Bpichot (talk | contribs)

Note

For a more up-to-date list of Gotchas, see

This page documents various gotchas (counter-intuitive features of Grid'5000) that could affect users' experiments in surprising ways.


Topology of ethernet networks

Most (large) clusters have a hierarchical ethernet topology, because ethernet switches with a large number of ports are too expensive. A good example of such a hierarchical topology is the Rennes:Network for the paravance and parasilo clusters, where nodes are connected to 3 different switches. When running experiments that use the ethernet network intensively, it is a good idea to request nodes on the same switch, e.g. with oarsub -l switch=1/nodes=5, or to request nodes connected to a specific switch, e.g. with oarsub -p "switch='cisco2'" -l nodes=5.
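For reference, the two oarsub invocations mentioned above can be written as follows. The switch name "cisco2" comes straight from the example; actual switch names differ per site, and the -I (interactive) flag is optional:

```shell
# Reserve 5 nodes that are all connected to the same (arbitrary) switch:
oarsub -l switch=1/nodes=5 -I

# Reserve 5 nodes connected to one specific switch, here "cisco2"
# (switch names vary from site to site; check the site's Network page):
oarsub -p "switch='cisco2'" -l nodes=5 -I
```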

Performance of ethernet networks

The backplane bandwidth of ethernet switches doesn't usually allow full-speed communications between all the ports of the switch.

High-performance networks

The topology of Infiniband and Myrinet networks is generally less surprising, and many of them are non-blocking (the switch can handle the total bandwidth of all ports simultaneously). However, there are some exceptions:

  • the Infiniband network in Grenoble is hierarchical (see Grenoble:Network).
  • in Nancy, graphene-144 is connected to the griffon Infiniband switch. This was required to free a port on the graphene switch, which is used to connect the two Infiniband switches together. This can impact the performance of your application if you use all 144 graphene nodes.
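If switch-local Infiniband performance matters more than node count, one way to avoid the odd node out is to exclude it in the resource request. A sketch, assuming the OAR properties cluster and network_address and that FQDN (verify against the properties your site actually exposes):

```shell
# Reserve graphene nodes while excluding graphene-144, which hangs off
# the griffon Infiniband switch (property names and FQDN are assumptions):
oarsub -p "cluster='graphene' AND network_address != 'graphene-144.nancy.grid5000.fr'" -l nodes=10 -I
```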

Compute nodes

All Grid'5000 clusters are supposed to contain homogeneous (identical) sets of nodes, but there are some exceptions.

Hard disks

Due to their high failure rate, hard disks tend to get replaced frequently, and it is not always possible to keep the same model during the whole life of a cluster. If this is important to you, please check the exact disk model using the Reference API, where storage is described in detail for each node.
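For example, the storage description of a single node can be fetched from the Reference API with curl. This is a sketch: the exact URL layout (stable version, sites/clusters/nodes path) and the storage_devices field name are assumptions to check against the API version you use:

```shell
# Fetch the description of one node and extract its storage section
# (URL layout and "storage_devices" field name are assumptions):
curl -s https://api.grid5000.fr/stable/sites/grenoble/clusters/edel/nodes/edel-1 \
  | jq '.storage_devices'
```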

Different disks in the Grenoble edel cluster

Disks of the Grenoble edel cluster come in two different sizes: 128 GB and 64 GB. Disk size is limited to 64 GB on all nodes. To use the additional space, refer to Advanced_Kadeploy#Use_disk.28s.29_as_I_want.

  • 45 nodes with a 128 GB disk: 1, 5-6, 8-9, 12, 15-16, 19, 21, 23-25, 27-29, 32, 35, 37, 39-43, 45-46, 48-50, 52, 55-57, 59, 61-63, 65-72
  • 26 nodes with a 64 GB disk: 2-4, 7, 10-11, 13-14, 17-18, 20, 22, 26, 30-31, 33-34, 36, 38, 44, 47, 51, 53, 58, 60, 64
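If your experiment depends on the larger disks, one option is to target those nodes explicitly in the reservation. A sketch, assuming the network_address OAR property and the SQL "in (...)" form are accepted by your OAR setup; the node list is a shortened excerpt from the 128 GB list above:

```shell
# Reserve only edel nodes known to have 128 GB disks
# (property name and FQDNs are assumptions; extend the list as needed):
oarsub -p "network_address in ('edel-1.grenoble.grid5000.fr', 'edel-5.grenoble.grid5000.fr', 'edel-6.grenoble.grid5000.fr')" -l nodes=3 -I
```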

Some of Lyon's sagittaire nodes have different hardware than the others

Sagittaire nodes 70 to 79 are not identical to the others: they have two disks (2x73 GB) and 16 GiB of RAM.


  • The standard environment (the one users get when not deploying) on all compute nodes is identical, with the exception of additional drivers and software to support GPUs, and Myrinet or Infiniband networks on sites where they are available.
  • The user frontends are identical on all sites.
  • The reference environments ({lenny,squeeze,wheezy}-x64-{min,base,nfs,xen,big}) are identical on all sites.