New on Bridges
Increased time limit for RM and RM-shared partitions
The maximum time for the RM and RM-shared partitions has been increased from 48 to 72 hours.
Compiler defaults changing
The default versions for the Intel and PGI compilers will change during the maintenance Oct. 23-24, and some older versions will be removed.
PGI version 17.5 and Intel version 184.108.40.206 will be removed. If you have been using one of these, you will need to choose a new version going forward.
Compiler defaults changed
The default version for some compilers has changed on Bridges. To see what versions are available, type
module avail compiler
module avail pgi
You will see a list of all available versions. The most recent version is the default, which is loaded if you do not request a specific version, e.g.,
module load pgi
To get a different version, load the specific version you want:
module load pgi/18.1
See the module documentation for more information on using modules.
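A typical session combining these commands might look like the sketch below; the version number shown is illustrative, so run "module avail pgi" to see what is actually installed:

```shell
# List every installed PGI version; the default is typically marked "(default)"
module avail pgi

# Load the default (most recent) PGI compiler
module load pgi

# Confirm which version is now loaded
module list

# Switch to a specific older version instead
module unload pgi
module load pgi/18.1
```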
Bridges-AI early user program underway
With the installation of the Bridges-AI nodes, the early user program for testing their capabilities has started. Bridges-AI includes nine Apollo 6500 servers, each holding 8 NVIDIA Tesla "Volta" GPUs, the most powerful GPUs in the world, and an NVIDIA DGX-2, which tightly integrates 16 "Volta" GPUs using the world's highest-bandwidth on-node switch, the NVSwitch. When used as a single processing unit, the DGX-2 provides 2 petaflops of peak performance.
If you have questions about the early user program, email email@example.com. For access after the early user program, submit a request for allocation through the XSEDE portal. Bridges-AI is available starting January 2019.
See the Bridges-AI early user guide for information on using these nodes during the early user period.
Singularity 3.0 now the default
Singularity v3.0.0 has been installed on Bridges and made the default. You can still load other versions by giving the full module name to the module load command.
GPU limit for jobs set to 8
Due to demand for Bridges' GPU nodes, the maximum number of GPUs allowed for a single job has been set to 8. This means you can request at most 4 P100 nodes or 2 K80 nodes in a job. Job limits are evaluated continually and may change to provide the best experience for Bridges' users.
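As an illustration, a batch script staying within the new limit could request all 8 GPUs across 4 P100 nodes. This is a sketch only; the partition name, --gres syntax, and the application name "myscript.py" are assumptions to be checked against the Bridges User Guide:

```shell
#!/bin/bash
# Sketch of a SLURM batch script for Bridges' GPU partition.
#SBATCH -p GPU              # GPU partition (name assumed)
#SBATCH -N 4                # 4 P100 nodes...
#SBATCH --gres=gpu:p100:2   # ...with 2 GPUs each = 8 GPUs, the per-job maximum
#SBATCH -t 08:00:00         # 8-hour walltime

# Set up the environment and run (hypothetical application)
module load cuda
python myscript.py
```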
For more information on submitting jobs and the limits set on Bridges' partitions, see the Running Jobs section of the Bridges User Guide.
Pylon2 has been discontinued. Please use pylon5 for all your file storage.
The pylon2 filesystem will be discontinued on June 19, 2018. If you have not already done so, move your files from pylon2 now.
Instructions for moving files from pylon2 to pylon5 are given below.
Recent filesystem updates have resulted in increased capacity and improved performance of the pylon5 filesystem. To support this upgraded environment, changes to the pylon filesystems have been made:
- The pylon5 file wiper was discontinued on March 1, 2018. The wiper had targeted files older than 30 days.
- Pylon5 is now the recommended storage system for all of your Bridges file needs, both short and long term. Please direct all jobs to write to pylon5 and transfer any new files to this file system as well.
- Pylon2 has been unmounted from the Bridges login nodes. This means that you cannot see or access any pylon2 directory from a login node. You will get a "No such file or directory" error if you try. Use rsync or Globus to access your pylon2 directory.
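As one possible approach, a pylon2 directory can be copied to pylon5 with rsync through Bridges' data transfer nodes. The hostname, group name, and paths below are illustrative assumptions; substitute your own username and directories, and confirm the transfer hostname in the Bridges documentation:

```shell
# Pull a pylon2 directory over to pylon5 via a data transfer node.
# "username" and "groupname" are placeholders. -a preserves permissions
# and timestamps; -v reports progress.
rsync -av username@data.bridges.psc.edu:/pylon2/groupname/username/mydata \
      /pylon5/groupname/username/
```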
Once we are confident that users have updated their job scripts and are using pylon5, we will phase out pylon2. This will allow us to reclaim the pylon2 storage space and add it to the new and improved pylon5.
If you have questions or run into any issues in modifying your job scripts or moving your files from pylon2 to pylon5, please let us know by emailing firstname.lastname@example.org.