Bridges User Guide

 

Account Administration

Charging

Until June 1, 2017, Bridges allocations are granted as "Bridges regular" or "Bridges large", and "Bridges regular" includes the use of Bridges' GPU nodes.  After that date, the GPU nodes will be allocated and charged separately.

Bridges regular 

The RSM nodes are allocated as "Bridges regular".  Until June 1, 2017, this includes all of Bridges' GPU nodes.  Service Units (SUs) are defined in terms of compute resources:

1 SU = 1 core-hour

Bridges large

The LSM and ESM nodes are allocated as "Bridges large".  Service Units (SUs) are defined in terms of memory requested:

1 SU = 1 TB-hour 

Bridges GPU

Until June 1, 2017, Bridges GPU nodes are part of "Bridges regular" allocations and are charged at the same rate as RSM nodes.  Beginning June 1, 2017, the GPU nodes will be allocated and charged separately from the RSM nodes.

Bridges contains two kinds of GPU nodes: those with NVIDIA Tesla K80 GPUs and those with NVIDIA Tesla P100 GPUs.  Because the two node types differ in performance, they are charged at different rates.

K80 nodes 

Each K80 node holds 4 GPUs, each of which can be allocated separately.  Service Units (SUs) are defined in terms of GPU-hours:

1 GPU-hour = 1 SU

Note that the use of an entire K80 GPU node for one hour would be charged 4 SUs.

P100 nodes

Each P100 node holds 2 GPUs, which can be allocated separately.  Service Units (SUs) are defined in terms of GPU-hours:

1 GPU-hour = 2.5 SUs

Note that the use of an entire P100 node for one hour would be charged 5 SUs.
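The rates above can be sanity-checked with a little shell arithmetic.  The job sizes below (cores, TB, hours) are hypothetical, chosen only to illustrate the charging formulas:

```shell
# Hypothetical job sizes; only the rates come from the charging rules above.

# Bridges regular: 1 SU = 1 core-hour
regular_sus=$(( 28 * 2 ))                      # 28 cores for 2 hours -> 56 SUs

# Bridges large: 1 SU = 1 TB-hour
large_sus=$(( 3 * 4 ))                         # 3 TB for 4 hours -> 12 SUs

# K80: 1 GPU-hour = 1 SU; a full K80 node has 4 GPUs
k80_sus=$(( 4 * 1 * 1 ))                       # whole K80 node for 1 hour -> 4 SUs

# P100: 1 GPU-hour = 2.5 SUs; a full P100 node has 2 GPUs
p100_sus=$(awk 'BEGIN { print 2 * 1 * 2.5 }')  # whole P100 node for 1 hour -> 5 SUs

echo "regular=$regular_sus large=$large_sus k80=$k80_sus p100=$p100_sus"
```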

 

Transferring Regular Memory SUs to GPU SUs

Beginning in early May 2017, you can transfer part of your Regular Memory allocation to a GPU allocation.  To do this, submit a transfer request through the XSEDE User Portal.  Instructions for submitting a transfer request are at https://portal.xsede.org/knowledge-base/-/kb/document/avva

 

Managing multiple grants

If you have more than one grant, be sure to charge your usage to the correct one.  Usage is tracked by group name.

Find your group names

To find your group names, use the id command.

id -Gn

will list all the groups you belong to.

Find your current group

id -gn

will list the group associated with your current session.

Change your default group

All usage is charged to your primary group by default.  To change your primary group (the group to which your SLURM jobs are charged by default), use the change_primary_group command.  Type:

change_primary_group -l

to see all your groups.  Then type

change_primary_group groupname

to set groupname as your default group.

 

Charging for batch or interactive use

Batch jobs and interactive sessions are charged to your primary group by default.  To charge your usage to a different group, specify that group with the -A groupname option to the SLURM sbatch command.  See the Running Jobs section of this Guide for more information on batch jobs, interactive sessions and SLURM.
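As a sketch, a minimal sbatch script that charges its usage to a group other than your primary one might look like the following.  The group name, node count, and walltime here are placeholders; replace them with values appropriate to your grant and job:

```shell
#!/bin/bash
#SBATCH -A groupname        # charge this job to "groupname" rather than your primary group
#SBATCH -N 1                # request one node
#SBATCH -t 00:30:00         # 30-minute walltime

# The job's usage (core-hours, TB-hours, or GPU-hours) is billed to the
# group named with -A.
echo "This job's usage is charged to the group given with -A."
```

The same option can also be given on the command line instead of in the script, e.g. sbatch -A groupname myjob.sh.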

 

Tracking your usage

There are several ways to track your Bridges usage.  The xdusage command is available on Bridges and has a man page.  The projects command will also help you track your usage; it shows grant information, including usage and the pylon directories associated with the grant.

Type:

projects

 

For more detailed accounting data you can use the Grant Management System.  You can also track your usage through the XSEDE User Portal.  The xdusage and projects commands and the XSEDE User Portal accurately reflect the impact of a grant renewal, but the Grant Management System currently does not.

Managing your XSEDE allocation

Most account management functions for your XSEDE grant are handled through the XSEDE User Portal.  You can search the Knowledge Base to get help.
