Sample batch scripts for Bridges

Sample batch scripts are available both for some popular software packages and for general use on Bridges.

For more information on how to run a job on Bridges, what partitions are available, and how to submit a job, see the Running Jobs section of the Bridges User Guide.
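
Once you have a batch script, you submit it to SLURM with the sbatch command and can follow its progress with squeue. A minimal sketch (myjob.sh is a placeholder for your script's name):

# submit a batch script; sbatch reports the job id it assigns
sbatch myjob.sh

# list your queued and running jobs
squeue -u username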

Sample batch scripts for popular software packages

Sample scripts for some popular software packages are available on Bridges in the directory /opt/packages/examples. There is a subdirectory for each package, containing the script along with any required input data and typical output.

See the documentation for a particular package for more information on using it and how to test any sample scripts that may be available. 

Sample batch scripts for common types of jobs

Sample Bridges batch scripts for common job types are given in this document. 

 Note that in each sample script:

  • The bash shell is used, indicated by the first line '#!/bin/bash'. If you use a different shell, some Unix commands will be different.
  • For username and groupname you must substitute your username and your appropriate Unix group.

Sample scripts are available for:

  • OpenMP jobs
  • MPI jobs
  • Hybrid OpenMP-MPI jobs
  • Job arrays
  • Bundling single-core jobs
  • Bundling multi-core jobs
  • The GPU partition
  • The GPU-shared partition

Sample script for OpenMP

#!/bin/bash
#SBATCH -N 1
#SBATCH -p RM
#SBATCH --ntasks-per-node 28
#SBATCH -t 5:00:00

# echo commands to stdout
set -x

# move to your appropriate pylon5 directory
# this job assumes:
# - all input data is stored in this directory
# - all output should be stored in this directory
cd /pylon5/groupname/username/path-to-directory

# run OpenMP program
export OMP_NUM_THREADS=28
./myopenmp

Notes:

The --ntasks-per-node option indicates that you will use all 28 cores on the node.

For groupname, username, and path-to-directory you must substitute your Unix group, username, and appropriate directory path.
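
The script assumes that an OpenMP executable named myopenmp already exists in the working directory. As a hypothetical illustration (the source file name and compiler choice are assumptions, not part of the example), it could be built with OpenMP support like this:

# hypothetical: build an OpenMP program from C source before submitting
gcc -fopenmp -O2 -o myopenmp myopenmp.c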


 

Sample script for MPI

#!/bin/bash
#SBATCH -p RM
#SBATCH -t 5:00:00
#SBATCH -N 2
#SBATCH --ntasks-per-node 28
# echo commands to stdout
set -x

# move to your appropriate pylon5 directory
cd /pylon5/groupname/username/path-to-directory

# set variable so that task placement works as expected
export I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=0

# copy input files to LOCAL file storage
srun -N $SLURM_JOB_NUM_NODES --ntasks-per-node=1 \
  sh -c 'cp path-to-directory/input.${SLURM_PROCID} $LOCAL'

# run MPI program
mpirun -np $SLURM_NTASKS ./mympi

# copy output files to pylon5
srun -N $SLURM_JOB_NUM_NODES --ntasks-per-node=1 \
  sh -c 'cp $LOCAL/output.* /pylon5/groupname/username/path-to-directory'

Notes:

The variable $SLURM_NTASKS gives the total number of tasks requested in a job. In this example $SLURM_NTASKS will be 56, because the -N option requested 2 nodes and the --ntasks-per-node option requested all 28 cores on each node.

The export command sets I_MPI_JOB_RESPECT_PROCESS_PLACEMENT so that your task placement settings are effective. Otherwise, the SLURM defaults are in effect.

The srun commands are used to copy files between pylon5 and the $LOCAL file systems on each of your nodes.

The first srun command assumes you have two files named input.0 and input.1 in your pylon5 file space. It will copy input.0 and input.1 to, respectively, the $LOCAL file systems on the first and second nodes allocated to your job.

The second srun command will copy files named output.* back from your $LOCAL file systems to your pylon5 file space before your job ends. In this command '*' functions as the usual Unix wildcard.

For groupname, username, and path-to-directory you must substitute your Unix group, username, and appropriate directory path.
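
Before the job starts, the numbered input files must already be in your pylon5 directory, one per node. A sketch of staging them from the login node (the source file names are placeholders):

# hypothetical: stage one input file per node into pylon5 before submitting
cp mydata-for-node0 /pylon5/groupname/username/path-to-directory/input.0
cp mydata-for-node1 /pylon5/groupname/username/path-to-directory/input.1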


 

Sample script for hybrid OpenMP-MPI

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=14
#SBATCH --time=00:10:00
#SBATCH --job-name=hybrid
cd $SLURM_SUBMIT_DIR

mpiifort -xHOST -O3 -qopenmp -mt_mpi hello_hybrid.f90 -o hello_hybrid.exe
# set variable so task placement works as expected
export I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=0
mpirun -print-rank-map -n $SLURM_NTASKS -genv \
OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK -genv I_MPI_PIN_DOMAIN=omp \
./hello_hybrid.exe

Notes:

This example asks for 2 nodes, 4 MPI tasks, and 14 OpenMP threads per MPI task: 4 tasks × 14 threads = 56 cores, which fully uses the 28 cores on each of the 2 nodes.

The export command sets I_MPI_JOB_RESPECT_PROCESS_PLACEMENT so that your task placement settings are effective. Otherwise, the SLURM defaults are in effect.
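
If you want the script to confirm the layout at run time, a line like the following (a sketch, not part of the original example) prints the tasks × threads product, which should match the total number of cores requested:

# optional sanity check: 4 tasks x 14 threads = 56 cores on 2 nodes
echo "$SLURM_NTASKS tasks x $SLURM_CPUS_PER_TASK threads = $(( SLURM_NTASKS * SLURM_CPUS_PER_TASK )) cores"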


 

Sample script for job array

#!/bin/bash
#SBATCH -t 05:00:00
#SBATCH -p RM-shared
#SBATCH -N 1
#SBATCH --ntasks-per-node 5
#SBATCH --array=1-5

set -x

./myexecutable $SLURM_ARRAY_TASK_ID

Notes:

This script will generate five jobs that will each run on a separate core on the same node. The value of the variable SLURM_ARRAY_TASK_ID is the array task index, which in this example ranges from 1 to 5. Good candidates for job arrays are jobs that can use this index alone to determine the processing path for each job. For more information about job arrays, see the sbatch man page and the online SLURM documentation.
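
For example, myexecutable, or a wrapper script around it, can use the index it receives to choose a different input file for each array task. A hypothetical wrapper sketch (the program and file names are placeholders):

#!/bin/bash
# hypothetical wrapper: $1 receives the value of $SLURM_ARRAY_TASK_ID
./myprogram < input.$1 > output.$1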


 

Sample script for bundling single-core jobs

#!/bin/bash
#SBATCH -N 1
#SBATCH -p RM-shared
#SBATCH -t 05:00:00
#SBATCH --ntasks-per-node 14
 
echo SLURM NTASKS: $SLURM_NTASKS
i=0
while [ $i -lt $SLURM_NTASKS ]
do
numactl -C +$i ./run.sh &
let i=i+1
done
wait # IMPORTANT: wait for all to finish or get killed

Notes:

Bundling or packing multiple jobs in a single job can improve your turnaround and improve the performance of the SLURM scheduler.
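
The loop above starts $SLURM_NTASKS copies of run.sh, each pinned to a different core by numactl. If each copy should work on different data, one variation (shown only as a sketch, not part of the original example) is to pass the loop index as an argument:

# variation on the loop body: pass the index so each copy selects its own input
numactl -C +$i ./run.sh $i &

# run.sh (hypothetical) could then use the argument, for example:
# ./myserialprogram < input.$1 > output.$1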


 

Sample script for bundling multi-core jobs

#!/bin/bash
#SBATCH -N 1
#SBATCH -p RM-shared
#SBATCH -t 05:00:00
#SBATCH --ntasks-per-node 14
#SBATCH --cpus-per-task 2

echo SLURM NTASKS: $SLURM_NTASKS
i=0
while [ $i -lt $SLURM_NTASKS ]
do
numactl -C +$i ./run.sh &
let i=i+1
done
wait # IMPORTANT: wait for all to finish or get killed

Notes:

Bundling or packing multiple jobs in a single job can improve your turnaround and improve the performance of the SLURM scheduler.


 

Sample batch script for GPU partition

#!/bin/bash
#SBATCH -N 2
#SBATCH -p GPU
#SBATCH --ntasks-per-node 28
#SBATCH -t 5:00:00
#SBATCH --gres=gpu:p100:2

#echo commands to stdout
set -x

#move to working directory
# this job assumes:
# - all input data is stored in this directory
# - all output should be stored in this directory
cd /pylon5/groupname/username/path-to-directory

# run GPU program
./mygpu

Notes:

The value of the --gres=gpu option indicates the type and number of GPUs you want.

For groupname, username, and path-to-directory you must substitute your Unix group, username, and appropriate directory path.
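
The script assumes a GPU executable named mygpu. As a hypothetical illustration, a CUDA source file could be compiled before submission after loading a CUDA module (the module and file names are assumptions; check module avail cuda for the versions installed on Bridges):

# hypothetical: build a CUDA program before submitting the job
module load cuda
nvcc -O2 -o mygpu mygpu.cu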


 

Sample batch script for GPU-shared partition

#!/bin/bash
#SBATCH -N 1
#SBATCH -p GPU-shared
#SBATCH --ntasks-per-node 7
#SBATCH --gres=gpu:p100:1
#SBATCH -t 5:00:00

# echo commands to stdout
set -x

# move to working directory
# this job assumes:
# - all input data is stored in this directory
# - all output should be stored in this directory
cd /pylon5/groupname/username/path-to-directory

# run GPU program
./mygpu

Notes:

The --gres=gpu option indicates the number and type of GPUs you want.

For groupname, username, and path-to-directory you must substitute your Unix group, username, and appropriate directory path.
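
If you want to see which GPU hardware your job can use, adding nvidia-smi to the script (a sketch, not part of the original example) prints information about the GPUs visible to the job:

# optional: report the GPUs visible to this job
nvidia-smi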
