Singularity

From Grid5000
Singularity is a popular container solution for HPC systems. It natively supports GPU and high performance networks in containers and is compatible with docker images. More info at: https://sylabs.io/docs/


Grid'5000 supports the Singularity container platform: Singularity is available in the standard environment and does not require root privileges to run.


== Basic usage ==


Just run the "singularity" command to use it. It can also be run in an OAR submission (a non-interactive batch job), for instance:


{{Term|location=frontend| cmd=<code class="command">oarsub</code> -l core=1 "<code class="replace">/grid5000/code/bin/singularity run library://sylabsed/examples/lolcow</code>"}}


{{Note|text=The full path to <code class="command">/grid5000/code/bin/singularity</code> is required for non-interactive OAR jobs}}


The Singularity user documentation is available at https://sylabs.io/guides/3.5/user-guide. It describes the various ways to run programs inside a container and how to build your own container image (which requires root privileges, but can be done on your own laptop or on a Grid'5000 node using "sudo-g5k").
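As a hedged sketch of the build workflow mentioned above: an image is typically described by a definition file and built with root privileges, which "sudo-g5k" can provide on a Grid'5000 node. The file and image names below (my_image.def, my_image.sif) and the packages installed are purely illustrative.

```shell
# Illustrative definition file (names and packages are assumptions,
# not a Grid'5000-provided recipe).
cat > my_image.def <<'EOF'
Bootstrap: docker
From: ubuntu:20.04

%post
    apt-get update && apt-get install -y cowsay

%runscript
    /usr/games/cowsay "Hello from the container"
EOF

# Building requires root; on a Grid'5000 node, sudo-g5k grants it.
sudo-g5k singularity build my_image.sif my_image.def

# Running the resulting image needs no root.
singularity run my_image.sif
```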


== Running MPI programs in Singularity containers ==


MPI programs may be run in Singularity containers by leveraging both the MPI implementation available on the host, i.e. a Grid'5000 physical node (which has direct access to the high performance network hardware if present), and the MPI library that must be installed inside the container.
 
MPI programs in the Singularity container can then be started using the mpirun command on the host.


See https://sylabs.io/guides/3.5/user-guide/mpi.html for more information.


For instance, to submit such an MPI job under OAR, use:
 
{{Term|location=frontend| cmd=<code class="command">oarsub</code> -l nodes=2 "<code class="replace">mpirun -hostfile \$OAR_NODE_FILE --mca orte_rsh_agent oarsh -- /grid5000/code/bin/singularity exec my_mpi_image.sif /opt/mpitest</code>"}}


== Using GPUs in Singularity containers ==


GPUs available on the host can be made available inside the container by using the '''--nv''' option (for Nvidia GPUs only).


For instance, to start an interactive tensorflow environment with one GPU, first submit the job reserving 1 GPU:


{{Term|location=frontend| cmd=<code class="command">oarsub</code> -I <code class="replace">-l gpu=1</code>}}


Then on that node:


{{Term|location=node| cmd=<code class="command">singularity</code> run <code class="replace">--nv docker://tensorflow/tensorflow:latest-gpu</code>}}
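A quick way to check that the host GPU is actually visible from inside the container is to run nvidia-smi through "singularity exec" with the same '''--nv''' option (this sketch assumes an Nvidia GPU node with the driver installed on the host):

```shell
# The host GPU(s) should appear in the nvidia-smi listing
# printed from inside the container.
singularity exec --nv docker://tensorflow/tensorflow:latest-gpu nvidia-smi
```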


More info at: https://sylabs.io/guides/3.5/user-guide/gpu.html
== Using docker containers with Singularity ==


Singularity can also be used to start docker containers. For instance:


{{Term|location=node| cmd=<code class="command">singularity</code> shell <code class="replace">docker://gentoo/stage3-amd64</code>}}
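When the same docker image is used repeatedly, it can also be converted once into a local SIF file with "singularity pull", so later runs do not need to fetch layers from the registry again (the file name gentoo.sif is illustrative):

```shell
# Download and convert the docker image into a local SIF file once...
singularity pull gentoo.sif docker://gentoo/stage3-amd64

# ...then reuse the local image without contacting the registry.
singularity shell gentoo.sif
```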

Revision as of 16:05, 22 April 2020
