New on Bridges
Singularity 3.0 now the default
Singularity v3.0.0 has been installed on Bridges and made the default. You can still load other versions by giving the full module name to the module load command.
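For example, to list the installed Singularity modules and load an older one explicitly, you can type commands like these (the version string below is only an illustration; check the module avail output to see which versions are actually installed):
module avail singularity
module load singularity/2.6.0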
GPU limit for jobs set to 8
Due to demand for Bridges' GPU nodes, the maximum number of GPUs allowed for a single job has been set to 8. This means you can request at most 4 P100 nodes or 2 K80 nodes in a job. Job parameters are evaluated continually and may change to provide the best experience for Bridges' users.
For more information on submitting jobs and the limits set on Bridges' partitions, see the Running Jobs section of the Bridges User Guide.
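As an illustration, a batch job requesting the full 8-GPU limit on P100 nodes might include directives like the following. This is only a sketch; it assumes the GPU partition and the gpu:p100 GRES type, so check the Running Jobs section for the exact syntax for your node type.
#SBATCH -p GPU
#SBATCH -N 4
#SBATCH --gres=gpu:p100:2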
Pylon2 is being discontinued. Please use pylon5 for all your file storage.
The pylon2 filesystem will be discontinued on June 19, 2018. If you have not already done so, move your files from pylon2 now.
Instructions for moving files from pylon2 to pylon5 are given below.
Recent filesystem updates have resulted in increased capacity and improved performance of the pylon5 filesystem. To support this upgraded environment, changes to the pylon filesystems have been made:
- The pylon5 file wiper was discontinued on March 1, 2018. The wiper had targeted files older than 30 days.
- Pylon5 is now the recommended storage system for all of your Bridges file needs, both short and long term. Please direct all jobs to write to pylon5 and transfer any new files to this file system as well.
- Pylon2 has been unmounted from the Bridges login nodes. This means that you cannot see or access any pylon2 directory from a login node. You will get a "No such file or directory" error if you try. Use rsync or Globus to access your pylon2 directory. Details are below.
Once we are confident that users have updated their job scripts and are using pylon5, we will phase out pylon2. This will allow us to reclaim the pylon2 storage space and add it to the new and improved pylon5.
If you have questions or run into any issues in modifying your job scripts or moving your files from pylon2 to pylon5, please let us know by emailing firstname.lastname@example.org.
Moving files from pylon2 to pylon5
You can use the rsync command or the Globus web application to transfer your files from pylon2 to pylon5. We suggest you use rsync.
Remember to delete your pylon2 files once your transfers have finished.
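Because pylon2 is no longer mounted on the login nodes, do this cleanup from an interactive session, a batch job or a data transfer node. For example, once you have confirmed that a directory has arrived safely on pylon5, you could remove the pylon2 original with a command like the one below, where old-directory stands for a directory you have already copied:
rm -r /pylon2/groupname/username/old-directory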
The rsync command can be run on Bridges' compute nodes in an interactive session or a batch job, or by using ssh on one of Bridges' high-speed data transfer nodes. An advantage of rsync is that if the transfer does not complete, you can rerun the rsync command, and rsync will copy only those files which have not already been transferred.
PSC has created a shell script that you can use to move files from pylon2 to pylon5. The shell script can be used in an interactive session or in a batch job.
rsync will overwrite a file in the destination directory if a file of the same name in the source directory differs from it (for example, if the pylon2 copy has a more recent modification time). To avoid overwriting anything you already have on pylon5, the examples below copy your pylon2 files into a new subdirectory on pylon5. Once they are transferred, please examine the files and move them to the directory where you want them.
In all of the examples given here, change groupname, username and new-directory to your charging group, your userid and whatever name you choose for the new subdirectory that will hold the files.
The PSC-provided shell script is /opt/packages/utilities/pylon2to5. It will:
- Copy all files from your pylon2 home directory (/pylon2/groupname/username) to a subdirectory named "from_pylon2" under your pylon5 home directory (/pylon5/groupname/username/from_pylon2)
- Loop until it succeeds for all files or is killed (e.g. due to timeout). This could use a lot of SUs if failures persist.
- Skip copying older files from pylon2 on top of newer files in pylon5 with the same name (rsync's --update behavior); a rough sketch of this pattern is shown after this list.
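The retry-and-update pattern the script uses can be sketched as follows. This is only an illustration, not the actual contents of /opt/packages/utilities/pylon2to5:
#!/bin/bash
# Illustrative sketch only. -a preserves permissions and timestamps,
# -u skips files that are already newer on pylon5, -v lists each file copied.
until rsync -auv /pylon2/groupname/username/ /pylon5/groupname/username/from_pylon2
do
    echo "rsync did not finish cleanly; retrying..."
    sleep 60
done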
Start an interactive session by typing
interact
To use the PSC-supplied shell script, type the following when your interactive session begins
/opt/packages/utilities/pylon2to5
If you prefer not to use the PSC-supplied shell script, when your session begins you can use a command like
rsync -av /pylon2/groupname/username/ /pylon5/groupname/username/new-directory
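The trailing slash on the pylon2 path tells rsync to copy the contents of your pylon2 directory into new-directory, rather than creating an extra username subdirectory inside it.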
To run the PSC-supplied shell script in a batch job, create a batch script with the following content:
#!/bin/bash
#SBATCH -c 4
#SBATCH -p RM-shared
## newgrp my_other_grant
## The next line runs the shell script
/opt/packages/utilities/pylon2to5
To move files from a grant that is not your default, uncomment the newgrp command in the script by removing "##" and substitute the correct group name for "my_other_grant".
If you prefer not to use the PSC-supplied shell script, you can create a batch script which runs rsync, like the one shown here. As in the earlier examples, change groupname, username and new-directory to your charging group, your userid and the name of your new subdirectory.
#!/bin/bash
#SBATCH -p RM-shared
rsync -av /pylon2/groupname/username/ /pylon5/groupname/username/new-directory
Submit your batch script by typing the following, where script-name is the name of your script file
sbatch script-name
Data transfer node
Use Bridges' high-speed data transfer nodes to move your files from pylon2 to pylon5. At the Bridges' prompt, type
ssh data.bridges.psc.edu "rsync -av /pylon2/groupname/username/ /pylon5/groupname/username/new-directory"
To use the Globus web application, visit www.globus.org.
- Choose “PSC Bridges with XSEDE Authentication” as each endpoint (you will need to authenticate with your XSEDE login and password).
- For the first path, choose the pylon2 directory that you wish to transfer files from. For example: /pylon2/chargeid/userid
- For the second path, choose the appropriate pylon5 target directory: /pylon5/chargeid/userid
To find your chargeid on Bridges, use the projects command to see all of the allocations that you have access to.
- At the bottom of the Globus transfer page, choose the Transfer Settings that you wish to use (e.g. “preserve source file modification times”) and transfer your files as you would through any other web application.
If you need interactive access to pylon2, you can get an 8-hour session by typing one of the following commands.
To access a Regular Memory node:
interact -t 08:00:00
To access a Large Memory node:
interact -p LM --mem=256G -t 08:00:00
To access a GPU node:
interact --gpu -t 08:00:00