Modules

Note

This page is actively maintained by the Grid'5000 team. If you encounter problems, please report them (see the Support page). Additionally, as it is a wiki page, you are free to make minor corrections yourself if needed. If you would like to suggest a more fundamental change, please contact the Grid'5000 team.

Grid'5000 provides a set of software packages (mostly scientific) through Environment Modules, using the module command line tool.

They are available from the Grid'5000 frontends and from the cluster nodes (only on the standard, big, and nfs environments if deployment is used).

The modules command

General usage

The module system is designed to load software and make it available by modifying your environment (such as your PATH variable).

To get started, list available software using:

$ module avail
--------------------------------------------- /grid5000/spack/share/spack/modules/linux-debian9-x86_64 ---------------------------------------------
autoconf/2.69_gcc-6.4.0                        gcc/8.3.0_gcc-6.4.0                            magma/2.3.0_gcc-6.4.0
automake/1.16.1_gcc-6.4.0                      gmp/6.1.2_gcc-6.4.0                            memkind/1.7.0_gcc-6.4.0
boost/1.69.0_gcc-6.4.0                         hwloc/1.11.11_gcc-6.4.0                        miniconda2/4.5.11_gcc-6.4.0
cmake/3.13.4_gcc-6.4.0                         hwloc/2.0.2_gcc-6.4.0                          miniconda3/4.5.11_gcc-6.4.0
cuda/10.0.130_gcc-6.4.0(default)               intel-mkl/2017.4.239_gcc-6.4.0                 mpfr/4.0.1_gcc-6.4.0
cuda/7.5.18_gcc-6.4.0                          intel-mkl/2018.1.163_gcc-6.4.0                 netlib-lapack/3.8.0_gcc-6.4.0
cuda/8.0.61_gcc-6.4.0                          intel-mkl/2019.1.144_gcc-6.4.0                 netlib-xblas/1.0.248_gcc-6.4.0
cuda/9.0.176_gcc-6.4.0                         intel-mpi/2019.1.144_gcc-6.4.0                 numactl/2.0.12_gcc-6.4.0
cuda/9.1.85_gcc-6.4.0                          intel-parallel-studio/cluster.2019.2_gcc-6.4.0 openblas/0.3.5_gcc-6.4.0
cuda/9.2.88_gcc-6.4.0                          intel-tbb/2019.2_gcc-6.4.0                     openmpi/3.1.3_gcc-6.4.0
cudnn/5.1_gcc-6.4.0                            isl/0.19_gcc-6.4.0                             openmpi/4.0.1_gcc-6.4.0
cudnn/6.0_gcc-6.4.0                            jdk/11.0.2_9_gcc-6.4.0                         papi/5.6.0_gcc-6.4.0
cudnn/7.3_gcc-6.4.0                            libfabric/1.7.1_gcc-6.4.0                      swig/3.0.12_gcc-6.4.0
gcc/6.4.0_gcc-6.4.0                            libfabric/1.8.0_gcc-6.4.0                      tar/1.31_gcc-6.4.0
gcc/6.5.0_gcc-6.4.0                            likwid/4.3.2_gcc-6.4.0
gcc/7.4.0_gcc-6.4.0                            llvm/7.0.1_gcc-6.4.0
Note

If you need additional software to be installed, feel free to contact the Grid'5000 support team and we will look into it.

To load a package into your environment, use the load sub-command:

$ module load gcc
$ gcc --version
gcc (GCC) 8.3.0
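
Since module works by modifying your environment (notably PATH), the loaded compiler now takes precedence over the system one. As a quick sanity check, you can verify which binary is picked up with:

$ which gcc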

By default, module loads the latest available version of a package (versions are sorted in lexicographical order). You can also specify the version you need:

$ module load gcc/7.4.0_gcc-6.4.0
$ gcc --version
gcc (GCC) 7.4.0

You can also get more information about a package using the whatis or show sub-commands:

$ module whatis gcc
$ module show gcc

If you want to unload one or all of the currently loaded modules, you can use:

$ module unload gcc
$ module purge
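
To see which modules are currently loaded, you can use the list sub-command:

$ module list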

The full documentation of the module command is available at: http://modules.sourceforge.net/

Using modules in jobs

The module command is not a real executable, but a shell function.

If it is not available in your shell (which may be the case if you use zsh), make sure that /etc/profile.d/modules.sh is sourced.
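
For zsh, it is usually enough to source that file from your shell startup file, for instance:

# add to your ~/.zshrc to get the module shell function in zsh
source /etc/profile.d/modules.sh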

In addition, module must be executed in an actual shell to work:

oarsub 'module load gcc' will fail; you must use oarsub 'bash -l -c "module load gcc"' instead.
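
The same applies to batch jobs: the submitted script must run in a login shell so that the module function is defined. A minimal sketch (the script name and resource specification below are just examples):

#!/bin/bash -l
# my_job.sh: load the required module(s), then run the actual work
module load gcc
gcc --version

frontend: chmod +x my_job.sh
frontend: oarsub -l nodes=1,walltime=1 ./my_job.sh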

Packages that require a connection to a license server

Some packages available with module are licensed and require a connection to a license server. If your institution provides a license server for such software, you have to forward connections to that license server from the node where the software will be used.

For instance, to use the Intel compilers with the jetons.inria.fr license server (reachable from an Inria network or through Inria's VPN), forward connections to ports 29030 and 34430 from your node using an SSH tunnel. The Intel compilers must then be configured to use localhost:29030 as the license server, so that connections go through the tunnel. For example, use the following commands:

# Assuming that you are connecting from a network where jetons.inria.fr is reachable
laptop: ssh -R 29030:jetons.inria.fr:29030 -R 34430:jetons.inria.fr:34430 <your_node>.g5k
node: module load intel-parallel-studio
node: export INTEL_LICENSE_FILE=29030@127.0.0.1
node: icc -v