Singularity

Singularity is container software originally developed at Lawrence Berkeley National Laboratory.

If you need a specialized computing environment, you can use a Singularity container on Bridges. Your Singularity container will execute on Bridges' compute nodes and can use other Bridges resources, including the pylon filesystems. Within your container you can use a different Unix operating system and any software you need. You can set up your Singularity container without any intervention from PSC staff.

If you have questions about using Singularity containers on Bridges, send email to bridges@psc.edu.

Usage

You can use the Singularity containers that are installed on Bridges or you can bring your own. Execute the container on Bridges' compute nodes; containers cannot be executed on the front end nodes.

 

Singularity containers installed on Bridges

Many Singularity containers from the NVIDIA GPU Cloud (NGC), a GPU-accelerated cloud platform optimized for deep learning and scientific computing, are available on Bridges. NVIDIA optimizes these containers for Volta GPUs and subjects them to rigorous quality assurance.

The containers on the NGC Registry are Docker images, but we have converted many of them to Singularity for you to use on Bridges-AI. These containers may be run on Bridges-AI nodes or on Bridges’ NVIDIA Tesla P100 GPUs, but they are not compatible with Bridges’ Tesla K80 GPUs.

NVIDIA requests that you create an account at http://ngc.nvidia.com if you will use any of these containers.

This table lists the containers installed on Bridges. Multiple versions of each are available, differing in the versions of the software they contain. Check the contents of the appropriate directory to see the alternatives. For example, to see the available TensorFlow images, type

ls /pylon5/containers/ngc/tensorflow

In this case, the output shows that four images are available: two built with python2 and two built with python3.

18.09-py2.simg  18.10-py2.simg 
18.09-py3.simg  18.10-py3.simg

 

 

Package                    Path on Bridges                         NVIDIA Documentation
Caffe                      /pylon5/containers/ngc/caffe            https://ngc.nvidia.com/registry/nvidia-caffe
Caffe2                     /pylon5/containers/ngc/caffe2           https://ngc.nvidia.com/registry/nvidia-caffe2
CNTK                       /pylon5/containers/ngc/cntk             https://ngc.nvidia.com/registry/nvidia-cntk
DIGITS                     /pylon5/containers/ngc/digits           https://ngc.nvidia.com/registry/nvidia-digits
Inference Server           /pylon5/containers/ngc/inferenceserver  https://ngc.nvidia.com/registry/nvidia-inferenceserver
MATLAB                     /pylon5/containers/mdl
MXNet                      /pylon5/containers/ngc/mxnet            https://ngc.nvidia.com/registry/nvidia-mxnet
PyTorch                    /pylon5/containers/ngc/pytorch          https://ngc.nvidia.com/registry/nvidia-pytorch
TensorFlow                 /pylon5/containers/ngc/tensorflow       https://ngc.nvidia.com/registry/nvidia-tensorflow
TensorRT                   /pylon5/containers/ngc/tensorrt         https://ngc.nvidia.com/registry/nvidia-tensorrt
TensorRT Inference Server  /pylon5/containers/ngc/tensorrtserver   https://ngc.nvidia.com/registry/nvidia-tensorrtserver
Theano                     /pylon5/containers/ngc/theano           https://ngc.nvidia.com/registry/nvidia-theano
Torch                      /pylon5/containers/ngc/torch            https://ngc.nvidia.com/registry/nvidia-torch

 

 For details on the installed containers and the software each contains, see Singularity images on Bridges.

 

Bring your own Singularity container

Singularity containers cannot be built on Bridges. There are several ways to get a Singularity container onto Bridges. 

  • You can build a Singularity container on your local system and copy it to Bridges.
  • You can convert a Docker container to Singularity on your local system and copy it to Bridges.
  • You can copy a Docker container to Bridges and convert it to Singularity.
  • You can copy a Singularity container from a container registry to Bridges. 

Build a container on your local system

You can build your Singularity container on your local system, installing the singularity program if you need to. For more information on how to do this, see the Singularity web site. Copy your container to Bridges using the usual file transfer methods. See the Transferring Files section of this User Guide for more information.
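As a sketch, the local build-and-copy workflow looks like this. The definition file name, image name, username, and destination path are all placeholders; see the Transferring Files section for the correct hostname and transfer options.

```shell
# On your local machine: build an image from a Singularity definition file.
# Building requires root privileges, which is why it cannot be done on Bridges.
# "my-container.def" and "my-container.simg" are example names.
sudo singularity build my-container.simg my-container.def

# Copy the finished image to Bridges with scp.
# Replace "username" and the destination with your own details; see the
# Transferring Files section of this User Guide for the proper hostname.
scp my-container.simg username@bridges.psc.edu:~/
```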

Convert a Docker container

Bridges does not support Docker. If you have a Docker container you can convert it to a Singularity container and use that on Bridges.

On your local system, you can use a utility like docker2singularity to convert a Docker container to a single Singularity image file. Copy that file to Bridges using the usual file transfer methods. See the Transferring Files section of this User Guide for more information.
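A typical docker2singularity invocation looks like the following sketch. Docker must be installed locally, since docker2singularity itself runs as a Docker container; the image name "my-image:latest" is an example placeholder.

```shell
# Convert a local Docker image to a Singularity image file.
# The converted image is written to the current directory (mounted as /output).
# "my-image:latest" is an example Docker image name; substitute your own.
docker run -v /var/run/docker.sock:/var/run/docker.sock \
           -v $(pwd):/output \
           --privileged -t --rm \
           quay.io/singularity/docker2singularity \
           my-image:latest
```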

On Bridges, you can convert a Docker container in an interactive session or batch job by loading the Singularity module and using the singularity build command.

To convert an NGC Docker container to Singularity, use these commands in an interactive session or in a batch script.

source /etc/profile.d/modules.sh
module load singularity
export SINGULARITY_DOCKER_USERNAME='$oauthtoken'
export SINGULARITY_DOCKER_PASSWORD=your-key-string
export SINGULARITY_CACHEDIR=$SCRATCH/.singularity
singularity build $SCRATCH/new-container.simg docker://nvcr.io/nvidia/old-container

where SINGULARITY_DOCKER_USERNAME is the literal string $oauthtoken (in single quotes, as shown) and SINGULARITY_DOCKER_PASSWORD is the NGC API key you generate when you register at http://ngc.nvidia.com.
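For example, to build a local Singularity image of the 18.10-py3 TensorFlow container listed earlier (the tag is illustrative; check the NGC registry for current tags), the build command after setting the environment variables above would be:

```shell
# Pull the NGC TensorFlow Docker image and convert it to a Singularity
# image in your $SCRATCH space. The tag "18.10-py3" is an example.
singularity build $SCRATCH/tensorflow-18.10-py3.simg \
    docker://nvcr.io/nvidia/tensorflow:18.10-py3
```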

 

Copy a container from a registry

Use the singularity pull command on Bridges to copy a pre-existing container from a container registry.  If you use the singularity pull command to copy a Docker container, it will be converted to Singularity during the pull process.

To copy a container from a registry:

  1. Log in to Bridges
  2. Load the singularity module with
    module load singularity
  3. Copy the container you want with the singularity pull command.

More details on this can be found in the Singularity User Guide.

Examples of container registries include the NVIDIA GPU Cloud registry (https://ngc.nvidia.com/registry) and the BioContainers registry (https://biocontainers.pro/registry/#/).
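A minimal sketch of the pull workflow (the image name is an example only):

```shell
module load singularity

# Keep Singularity's cache out of your small home directory.
export SINGULARITY_CACHEDIR=$SCRATCH/.singularity

# Pull a Docker image from Docker Hub; it is converted to
# Singularity format automatically during the pull.
singularity pull docker://ubuntu:18.04
```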

 

Execute your Singularity container 

Once your container is on Bridges you can run it. Containers must run on Bridges' compute nodes, not on the front ends.  Use either the interact or sbatch command to get access to a compute node.  See the Running Jobs section of this User Guide for more information on SLURM and using Bridges' compute nodes.
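For example, to start an interactive session on a Bridges-AI GPU node, you might use something like the following. The partition, gres, and time values here are illustrative; see the Running Jobs section for the options appropriate to your allocation.

```shell
# Request an interactive session with one Volta GPU for one hour.
# Partition and gres names are examples; check the Running Jobs section.
interact -p GPU-AI --gres=gpu:volta16:1 -t 01:00:00
```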

Inside your interactive session or your batch job  you must first issue the command

module load singularity

Then you can use singularity commands to execute your container. If you are running on a GPU node, use the --nv switch to enable CUDA support; without it, CUDA is not available inside the container.

For example, to start Singularity and then fire up a shell, type

singularity shell --nv singularity-container-name.simg

where

singularity-container-name.simg is the container you wish to use

 

Alternately, you can create a bash shell script and run it inside of the container. To do so, once your interactive session has started, type

singularity exec --nv singularity-container-name.simg  bash_script.sh

where: 

singularity-container-name is the container you wish to use

bash_script.sh is your bash script
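Putting the pieces together, a batch job that runs a script inside a container might look like this sketch. The partition, gres, walltime, and file names are placeholders; adapt them to your allocation and files.

```shell
#!/bin/bash
#SBATCH -p GPU-AI                  # example partition
#SBATCH --gres=gpu:volta16:1       # example GPU request
#SBATCH -t 00:30:00                # example walltime

module load singularity

# Run your script inside the container, with CUDA support enabled (--nv).
singularity exec --nv $SCRATCH/my-container.simg ./bash_script.sh
```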

 

Common Singularity commands

Some common commands are listed here. For more information about Singularity, see the Singularity web site.

shell
Start a shell within your container using the operating system you have set up your container to use.
exec
Run a single command within your container.
run
Run a recipe script you have set up within your container. Using a recipe script forces users of your container to use a pre-established workflow.
help
Provides help on Singularity.
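As illustration, the commands above are used like this (the image name is a placeholder):

```shell
module load singularity

singularity shell --nv my-container.simg                 # interactive shell inside the container
singularity exec my-container.simg cat /etc/os-release   # run a single command
singularity run my-container.simg                        # run the container's recipe script
singularity help                                         # general help
singularity help exec                                    # help on a specific subcommand
```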

User Information

Technical questions:

Send mail to remarks@psc.edu or call the PSC hotline: 412-268-6350.