Singularity

Singularity is a popular container solution for HPC systems. It natively supports GPUs and high-performance networks inside containers and is compatible with Docker images. More info at: https://sylabs.io/docs/

Grid'5000 supports the Singularity container platform: it is available in the standard environment and does not require root privileges.

== Basic usage ==

Just run the "singularity" command to use it. It can also be run inside an OAR-submitted job, for instance:

 oarsub -l core=1 "/grid5000/code/bin/singularity run library://sylabsed/examples/lolcow"

(the full path to /grid5000/code/bin/singularity is required for non-interactive jobs)
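
Other subcommands cover common usages as well: "pull" downloads an image as a local SIF file, "exec" runs a given command inside it, and "shell" opens an interactive shell. A minimal sketch (the alpine image is just an illustrative example):

 singularity pull docker://alpine
 singularity exec alpine_latest.sif cat /etc/os-release
 singularity shell alpine_latest.sif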

Singularity user documentation is available at https://sylabs.io/guides/3.5/user-guide/index.html. It describes the various ways to run programs inside a container and how to build your own image (which requires root privileges, but can be done on your own laptop or on a Grid'5000 node using "sudo-g5k").
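
As a hedged illustration of image building, the definition file below (my_image.def) and its package list are purely illustrative, not an official recipe:

 Bootstrap: docker
 From: ubuntu:18.04
 %post
     apt-get update && apt-get install -y python3

Assuming "sudo-g5k" can be used to run a command with root privileges on the node, the image can then be built with:

 sudo-g5k /grid5000/code/bin/singularity build my_image.sif my_image.def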

== Running MPI programs under Singularity containers ==

MPI programs may be run under Singularity containers by relying on both the MPI implementation available on the host, i.e. on the Grid'5000 physical nodes (which provides access to high-performance network hardware when present), and an MPI library that must be installed inside the container.

MPI programs under Singularity can then be started by the host's mpirun.

See https://sylabs.io/guides/3.5/user-guide/mpi.html for more information.
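
As a hedged illustration of this hybrid approach, the definition file below installs OpenMPI inside the container and compiles a simple MPI hello-world (mpitest.c, not shown here); all names and package choices are illustrative, and the container's MPI version should be kept compatible with the host's:

 Bootstrap: docker
 From: ubuntu:18.04
 %files
     mpitest.c /opt
 %post
     apt-get update && apt-get install -y build-essential libopenmpi-dev openmpi-bin
     mpicc -o /opt/mpitest /opt/mpitest.c

Building it (for instance with "sudo-g5k /grid5000/code/bin/singularity build my_mpi_image.sif my_mpi_image.def") produces the my_mpi_image.sif image used in the example below.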

For instance, to submit such an MPI job under OAR, use:

 oarsub -l nodes=2 "mpirun -hostfile \$OAR_NODE_FILE --mca orte_rsh_agent oarsh -- /grid5000/code/bin/singularity exec my_mpi_image.sif /opt/mpitest"

== Using GPU under Singularity containers ==

GPUs available on the host can be made available inside the container by using the '--nv' option (for NVIDIA GPUs).

For instance, to start an interactive TensorFlow environment, use:

 frontend: oarsub -I -l gpu=1
 node: singularity run --nv docker://tensorflow/tensorflow:latest-gpu
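
To check that the GPU is indeed visible from inside the container, one can for instance run nvidia-smi under the same image ('--nv' also makes the NVIDIA driver tools available in the container):

 node: singularity exec --nv docker://tensorflow/tensorflow:latest-gpu nvidia-smi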

More info at: https://sylabs.io/guides/3.5/user-guide/gpu.html