Bridges to Bridges-2 Migration Guide

You will find working on Bridges-2 similar to working on Bridges. This document provides guidance on moving to Bridges-2 and notes some differences you should be familiar with. Be sure to check the Bridges-2 User Guide for full information.


Transferring files to Bridges-2

You can use any of the file transfer methods outlined in the Transferring Files section of the Bridges-2 User Guide to move files into Bridges-2, but consider these recommendations:

To copy files from the pylon5 filesystem on Bridges to the Ocean filesystem on Bridges-2, PSC has created a tool, filemover, to help. Filemover copies entire directories from pylon5 to the corresponding Ocean directory on Bridges-2. For more information, see the filemover documentation.

To copy files from your home directory on Bridges, a simple rsync command should work well. The sample command below transfers your entire Bridges home directory tree into a subdirectory named "from-bridges" of your Bridges-2 home directory. Run this command from Bridges-2, substituting your PSC username for username:

rsync -zvrh username@data.bridges.psc.edu:~/ $HOME/from-bridges


Node differences

Bridges-2 has the same types of compute nodes as you found on Bridges, but the nodes are more powerful. You should be aware of how many cores or GPUs you are requesting, as that will affect the SUs a job consumes.  Some differences are:

  • Bridges-2 RM nodes have 128 cores each; Bridges RM nodes had 28 cores each.
  • Bridges-2 RM nodes have at least 256GB of RAM, and some have 512GB; Bridges RM nodes had 128GB.
  • Bridges-2 EM nodes with 4TB of RAM replace the Bridges LM nodes, which had either 3TB or 12TB.
  • Bridges-2 GPU nodes all use the Volta architecture; Bridges contained some K80 and P100 GPUs.
  • Bridges-2 GPU nodes all have 8 GPUs; some Bridges GPU nodes had only 2.
  • Bridges-2 GPU nodes have 512GB of RAM; Bridges GPU nodes had 128GB or 192GB.
  • Note that soon after Bridges is decommissioned, its GPU-AI resources will be migrated to Bridges-2. Watch for an announcement.


Charging algorithms

On the RM nodes, SUs are still calculated by core-hours as they were on Bridges.  Be aware, however, that since the Bridges-2 RM nodes have 128 cores, using an entire node for one hour will generate 128 SUs.

On the EM nodes, SUs are now calculated by core-hours, instead of by TB-hours as they were on Bridges.

On the GPU nodes, SUs are still calculated by GPU-hours.  Be aware that all Bridges-2 GPU nodes contain 8 GPUs, while some Bridges GPU nodes had only 2.
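The charging rules above can be sketched as a small function. This is an illustrative calculation only (the function name and interface are my own, not a PSC tool), based solely on the rates stated here: core-hours for RM and EM nodes, GPU-hours for GPU nodes.

```python
def su_charge(node_type, hours, cores=0, gpus=0):
    """Estimate the SUs a Bridges-2 job consumes.

    Sketch based on the charging rules in this guide:
    RM and EM nodes charge core-hours; GPU nodes charge GPU-hours.
    """
    if node_type in ("RM", "EM"):
        return cores * hours
    if node_type == "GPU":
        return gpus * hours
    raise ValueError(f"unknown node type: {node_type}")

# An entire 128-core RM node for one hour:
print(su_charge("RM", 1, cores=128))   # 128 SUs
# All 8 GPUs on a GPU node for two hours:
print(su_charge("GPU", 2, gpus=8))     # 16 SUs
```

Note how a full RM node now generates 128 SUs per hour, versus 28 on Bridges.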


Shared file space for grant members

All grants on Bridges-2 are given a shared directory, open to all who are on the grant. This directory is /ocean/projects/groupname/shared, where groupname is the project id for your grant.


Module environment

The module environment has been updated.  In particular, the command module spider topic lists all modules pertaining to topic, along with some descriptive help information for each.

Module names do not always reflect whether a module was built for CPU or GPU use.  Be sure to check the help text by typing module help module-name when you need to know this.

Module names now include the compiler type and version when relevant. For example, modules are available for mvapich2 built with either the GNU or the Intel compilers.

[user@bridges2-login013 ~]$ module avail mvapich

------------------------------- /opt/modulefiles -------------------------------
   mvapich2/2.3.5-gcc8.3.1    mvapich2/2.3.5-intel20.4 (D)

   D:  Default Module



$PROJECT space

You have persistent storage space in the Ocean filesystem at /ocean/projects/groupname/username, accessible via the environment variable $PROJECT.  $SCRATCH is not defined on Bridges-2.


Porting your job scripts

There are several things you should be aware of when porting your job scripts to Bridges-2.


Be sure to change any explicit file paths in your job scripts from the Bridges pylon5 filesystem to the Bridges-2 Ocean filesystem.

Replace any references to $SCRATCH, as $SCRATCH is not defined on Bridges-2.

GPU use

The flags used to request GPUs have changed.

In an interactive session, use --gres=gpu:n, where n is the number of GPUs to use per node.

In a batch job, use --gpus=n, where n is the number of GPUs to use per node.


Check for the correct names of modules, as they will be different from the Bridges modules.  See the section on Module Environment in this document for more information.
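Putting these porting notes together, a ported GPU batch script might look like the following sketch. The partition name, module name, and program name are illustrative assumptions, not prescriptions; only the --gpus flag, the Ocean paths, and $PROJECT come from this guide. Check the Bridges-2 User Guide for the actual partition names.

```shell
#!/bin/bash
#SBATCH -N 1                 # one node
#SBATCH -t 01:00:00          # one hour
#SBATCH -p GPU               # partition name is an assumption; verify in the User Guide
#SBATCH --gpus=8             # batch jobs request GPUs with --gpus=n on Bridges-2

# Use Ocean paths; $SCRATCH and pylon5 paths from Bridges no longer exist.
cd $PROJECT                  # /ocean/projects/groupname/username

# Module names differ from Bridges; confirm with "module spider".
module load cuda             # illustrative module name

./my_gpu_program             # hypothetical application
```

Running the same check for interactive work, remember that the flag form differs: --gres=gpu:n in an interactive session versus --gpus=n in a batch job.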


Account administration

Charge ids

On Bridges-2, most charge ids are identical to the grant number. This may not be the case for some special types of grants. You can confirm your charge id for any grant with the projects command.

projects command

A new option to the projects command, -s, shows a condensed version of its output, which may be all the information you need.

my_quotas command

A new command, my_quotas, shows the Ocean quota for all of your grants.


Getting help

Questions about Bridges-2 usage should be sent to