Singularity
Note: This page is actively maintained by the Grid'5000 team. If you encounter problems, please report them (see the Support page). Additionally, as it is a wiki page, you are free to make minor corrections yourself if needed. If you would like to suggest a more fundamental change, please contact the Grid'5000 team.
Singularity is a popular container solution for HPC systems. It natively supports GPUs and high performance networks in containers and is compatible with Docker images. Grid'5000 supports Singularity containers: it is available using module and does not require root privileges. More info at: https://sylabs.io/docs/.
Basic usage
Load the singularity module:
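frontend:
module load singularity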
Just run the singularity command to use it:
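For instance, to check that it is available:

frontend:
singularity --version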
The Singularity user documentation is available at https://sylabs.io/guides/latest/user-guide. It describes the various ways to run programs inside a container and how to build your own container image.
Building a Singularity image
Building a container image requires root privileges. It can be done on your own laptop, or on a Grid'5000 node using "sudo-g5k":
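For example, assuming a definition file named my_image.def in your working directory (the file name is only an illustration), from within a job on a node:

node:
module load singularity
sudo-g5k `which singularity` build my_image.sif my_image.def

The `which singularity` part resolves the full path of the module-provided singularity binary before the command is executed as root by sudo-g5k.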
For more information about building Singularity containers, see https://docs.sylabs.io/guides/latest/user-guide/build_a_container.html
Using Docker containers with Singularity
Singularity can also be used to start Docker containers, by giving it an image location with the docker:// URI scheme.
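For instance, to open a shell inside a container built from the ubuntu image on Docker Hub (the image name is just an illustration):

node:
singularity shell docker://ubuntu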
Running Singularity containers in an OAR submission
Singularity containers can also be run in an OAR submission (non-interactive batch job). For instance:
frontend:
oarsub -l core=1 "module load singularity && singularity run library://sylabsed/examples/lolcow"
Running MPI programs in Singularity containers
MPI programs may be run in Singularity containers by leveraging both the MPI implementation available on the host, i.e. a Grid'5000 physical node (which has direct access to the high performance network hardware, if present), and the MPI library that must be installed inside the container.
MPI programs in the Singularity container can then be started using the mpirun command on the host.
See https://sylabs.io/guides/latest/user-guide/mpi.html for more information.
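As an illustration only, an image suitable for this approach could be built from a definition file along the following lines; the Debian base image, the package names and the mpitest.c source file are assumptions, and the MPI implementation installed inside the container should be compatible with the one used on the host:

Bootstrap: docker
From: debian:12

%files
    mpitest.c /opt/mpitest.c

%post
    # Install a compiler and an MPI implementation inside the image (assumed Debian package names)
    apt-get update
    apt-get install -y --no-install-recommends build-essential openmpi-bin libopenmpi-dev
    # Build the example MPI program used by the oarsub command below
    mpicc -o /opt/mpitest /opt/mpitest.c

Such a definition file can then be built into my_mpi_image.sif with sudo-g5k, as described in the section about building images above.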
For instance, to submit such an MPI job under OAR, assuming a Singularity image named my_mpi_image.sif in your home directory, use:
frontend:
oarsub -l nodes=2 "module load singularity && mpirun -hostfile \$OAR_NODE_FILE --mca orte_rsh_agent oarsh -- `which singularity` exec my_mpi_image.sif /opt/mpitest"
Using GPUs in Singularity containers
GPUs present on the host can be made available inside the container by using the --nv option (for Nvidia GPUs only).
For instance, to start an interactive TensorFlow environment with one GPU, first submit a job reserving one GPU:
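For example (an interactive job; adapt the resource selection to your needs):

frontend:
oarsub -I -l gpu=1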
Then on that node:
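A minimal sketch, assuming the tensorflow/tensorflow:latest-gpu image from Docker Hub is used (the image name and tag are just an illustration):

node:
module load singularity
singularity run --nv docker://tensorflow/tensorflow:latest-gpu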
More info at: https://sylabs.io/guides/latest/user-guide/gpu.html
Using Apptainer (instead of Singularity)
As Sylabs forked the Singularity project without renaming their fork, the original Singularity project decided to move to the Linux Foundation and rename itself Apptainer. See the official announcement for more information.
Like Singularity, Apptainer is available through module. To use it, just load the module and execute your container:
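For example, assuming the module is named apptainer and my_image.sif is a placeholder for your own container image:

node:
module load apptainer
apptainer run my_image.sif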