Singularity

{{Portal|User}}
{{Portal|Tutorial}}
{{Pages|HPC}}
{{TutorialHeader}}


Singularity is a popular container solution for HPC systems. It natively supports GPUs and high-performance networks in containers and is compatible with Docker images. Grid'5000 supports Singularity containers: it is available [[Modules|as a module]] and does not require root privileges. More info at: https://sylabs.io/docs/.


== Basic usage ==


Load the <code class="command">singularity</code> module:


{{Term|location=node| cmd=<code class="command">module</code> load singularity}}


Then just run the <code class="command">singularity</code> command to use it:


{{Term|location=node| cmd=<code class="command">singularity</code> run <code class="replace">library://sylabsed/examples/lolcow</code>}}
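Besides <code class="command">run</code>, the <code class="command">exec</code> and <code class="command">shell</code> subcommands are commonly used to execute a single command inside an image or to open an interactive shell in it, for instance:

{{Term|location=node| cmd=<code class="command">singularity</code> exec <code class="replace">library://sylabsed/examples/lolcow</code> cat /etc/os-release}}
{{Term|location=node| cmd=<code class="command">singularity</code> shell <code class="replace">library://sylabsed/examples/lolcow</code>}}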


The Singularity user documentation is available at https://sylabs.io/guides/latest/user-guide. It describes the various ways to run programs inside a container and how to build your own container image.


=== Building a Singularity image ===


Recent versions of Singularity allow building images without root access (see https://docs.sylabs.io/guides/latest/user-guide/fakeroot.html). However, this has limitations, so it is better to build images as root. This can be done on your own laptop or on a Grid'5000 node using "sudo-g5k":


{{Term|location=node| cmd=<code class="command">module</code> load singularity && sudo-g5k $(which singularity) build mpi.sif mpi.def}}
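The command above expects a definition file (here mpi.def). As a minimal sketch, assuming a Debian base image and the distribution's Open MPI packages, such a file could look as follows; adapt the package list and the way your own MPI program is built or copied into the image:

Bootstrap: docker
From: debian:12

%post
    # install an MPI implementation and basic build tools inside the image
    apt-get update
    apt-get install -y --no-install-recommends openmpi-bin libopenmpi-dev build-essential
    rm -rf /var/lib/apt/lists/*
    # build or copy your MPI program here, so that the binary ends up at a
    # known path inside the image (/opt/mpitest in the MPI example below)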


For more information about building Singularity containers, see https://docs.sylabs.io/guides/latest/user-guide/build_a_container.html


== Using docker containers with Singularity ==


Singularity can also be used to start docker containers. For instance:


{{Term|location=node| cmd=<code class="command">singularity</code> run <code class="replace">docker://debian</code>}}
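An image pulled from a registry can also be saved as a local SIF file first, which avoids fetching it again for every run (by default, <code class="command">singularity</code> pull names the file after the image and tag, here debian_latest.sif):

{{Term|location=node| cmd=<code class="command">singularity</code> pull <code class="replace">docker://debian</code>}}
{{Term|location=node| cmd=<code class="command">singularity</code> run <code class="replace">debian_latest.sif</code>}}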


== Running Singularity containers in an OAR submission ==


Singularity containers can also be run in an OAR submission (non-interactive batch job). For instance:


{{Term|location=frontend| cmd=<code class="command">oarsub</code> -l core=1 "<code class="replace">module load singularity && singularity run library://sylabsed/examples/lolcow</code>"}}
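For more complex jobs, it may be more convenient to put the commands in a small executable script and submit the script itself. A sketch, assuming a file named run_container.sh in your home directory, made executable with chmod +x (the -l option in the shebang requests a login shell so that the module command is available in the batch job):

#!/bin/bash -l
# load the singularity module and run the container, as in the inline example above
module load singularity
singularity run library://sylabsed/examples/lolcow

{{Term|location=frontend| cmd=<code class="command">oarsub</code> -l core=1 <code class="replace">./run_container.sh</code>}}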


== Running MPI programs in Singularity containers ==


MPI programs may be run in Singularity containers by leveraging both the MPI implementation available on the host, i.e. a Grid'5000 physical node (which has direct access to the high performance network hardware, if present), and the MPI library that must be installed inside the container.
 
MPI programs in the Singularity container can then be started using the mpirun command on the host.
 
See https://sylabs.io/guides/latest/user-guide/mpi.html for more information.
 
For instance, to submit such an MPI job under OAR, assuming a Singularity image named <code class="replace">my_mpi_image.sif</code> in your home directory, use:
 
{{Term|location=frontend| cmd=<code class="command">oarsub</code> -l nodes=2 "<code class="replace">module load singularity && mpirun -hostfile \$OAR_NODE_FILE --mca orte_rsh_agent oarsh -- `which singularity` exec my_mpi_image.sif /opt/mpitest</code>"}}
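Note that this hybrid approach generally requires the MPI implementation inside the container to be compatible with the one on the host (see the Sylabs documentation linked above). A quick way to compare the two versions from a node, assuming mpirun is installed in the image as in the mpi.def sketch above:

{{Term|location=node| cmd=<code class="command">mpirun</code> --version; <code class="command">singularity</code> exec my_mpi_image.sif mpirun --version}}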
 
== Using GPUs in Singularity containers ==
 
GPUs available on the host can be made available inside the container by using the '''--nv''' option (for Nvidia GPUs only).
 
For instance, to start an interactive TensorFlow environment with one GPU, first submit a job reserving one GPU:
 
{{Term|location=frontend| cmd=<code class="command">oarsub</code> -I <code class="replace">-l gpu=1</code>}}
 
{{Note|text=You may need to add "-q production" or "-t exotic" depending on which GPU cluster you want to use}}
 
Then on that node:
 
{{Term|location=node| cmd=<code class="command">module</code> load singularity}}
{{Term|location=node| cmd=<code class="command">singularity</code> run <code class="replace">--nv docker://tensorflow/tensorflow:latest-gpu</code>}}
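To check that the GPU is indeed visible from inside the container, nvidia-smi (made available in the container by the --nv option) can be run through the same image:

{{Term|location=node| cmd=<code class="command">singularity</code> exec <code class="replace">--nv docker://tensorflow/tensorflow:latest-gpu</code> nvidia-smi}}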
 
More info at: https://sylabs.io/guides/latest/user-guide/gpu.html
 
== Using Apptainer (instead of Singularity) ==
 
As Sylabs forked the Singularity project without renaming their fork, the Singularity project decided to move to the Linux Foundation and rename itself <code class="command">Apptainer</code>. See [https://apptainer.org/news/community-announcement-20211130/ the official announcement] for more information.
 
Like Singularity, Apptainer is available as a module. To use it, just load the module and run your container:
 
{{Term|location=node| cmd=<code class="command">module</code> load apptainer}}
{{Term|location=node| cmd=<code class="command">apptainer</code> run <code class="replace">docker://alpine</code>}}
 
== Example: Using Singularity to port a software environment between HPC infrastructures ==
 
Using Singularity is a good way to port software environments between HPC infrastructures, for example, between Grid'5000 and [http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html IDRIS' Jean Zay].
 
The following example describes how Singularity can be used together with [[Conda]] to share your software environment across two different HPC infrastructures (it is inspired by this [https://stackoverflow.com/questions/54678805/containerize-a-conda-environment-in-a-singularity-container Stack Overflow question]).
 
; Step 1 - On Grid'5000, create a Docker image with your Conda environment
 
(based on the [https://micromamba-docker.readthedocs.io/en/latest/quick_start.html micromamba Quick Start guide])
 
Create an env.yaml file to describe your Conda environment:
 
name: base
channels:
  - conda-forge
dependencies:
  - tensorflow-gpu
 
Create a Dockerfile:
 
FROM mambaorg/micromamba:latest
COPY --chown=$MAMBA_USER:$MAMBA_USER env.yaml /tmp/env.yaml
RUN micromamba install -y -n base -f /tmp/env.yaml && \
    micromamba clean --all --yes
 
Create a Docker image using this environment (see [[Docker]]):
{{Term|location=node| cmd=<code class="command">g5k-setup-docker</code>}}
 
{{Term|location=node| cmd=<code class="command">docker</code> build --tag my_app .}}
 
Check your docker image:
{{Term|location=node| cmd=<code class="command">docker</code> run -it --rm my_app python3 -c "import tensorflow as tf; print('Num GPUs Available:', len(tf.config.list_physical_devices('GPU')))"}}
 
; Step 2 - On Grid'5000, convert your Docker image to a Singularity image
Export your Docker image:
{{Term|location=node| cmd=<code class="command">docker</code> save -o my_app.tar my_app}}
Convert it to a Singularity image:
{{Term|location=node| cmd=<code class="command">singularity</code> build my_app.sif docker-archive://my_app.tar}}
Test your Singularity image:
$ singularity shell --nv my_app.sif
Singularity> eval "$(micromamba shell hook --shell bash)"
Singularity> micromamba activate
(base) Singularity> python -c "import tensorflow as tf; print('Num GPUs Available:', len(tf.config.list_physical_devices('GPU')))"
Num GPUs Available: 2
(base) Singularity>
 
; Step 3 - Copy your Singularity image to Jean Zay (using scp/rsync) and run it there
 
Note that unless you added Grid'5000's external addresses to your IDRIS account as described in the [[FAQ]], you must copy your image locally and then copy it to Jean Zay.
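For example, from your laptop, the image can be fetched from Grid'5000 through the access machine and then pushed to Jean Zay. A sketch, where mysite is the Grid'5000 site hosting your home directory and mylogin is your IDRIS login (check the IDRIS documentation for the exact login host to use):

$ rsync -avP access.grid5000.fr:mysite/my_app.sif .
$ rsync -avP my_app.sif mylogin@jean-zay.idris.fr: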


Specific information about running Singularity containers on Jean Zay is available at http://www.idris.fr/eng/jean-zay/cpu/jean-zay-utilisation-singularity-eng.html.


$ module load singularity
$ idrcontmgr cp my_app.sif
1 file copied.
$ singularity shell --nv $SINGULARITY_ALLOWED_DIR/my_app.sif
Singularity> eval "$(micromamba shell hook --shell bash)"
Singularity> micromamba activate
(base) Singularity> python -c "import tensorflow as tf; print('Num GPUs Available:', len(tf.config.list_physical_devices('GPU')))"
Num GPUs Available: 1
(base) Singularity>
