Pittsburgh Supercomputing Center 

Advancing the state of the art in high-performance computing,
communications and data analytics.


Allocation requests are now being accepted.

PSC's Computational Resources


This page lists PSC's supercomputing-class computational resources. For the data management options PSC provides, see the storage resources page.


Greenfield is designed for memory-limited scientific applications in fields as diverse as biology, chemistry, cosmology, machine learning and economics. Funded by the National Science Foundation (NSF), Greenfield comprises 360 cores and 18TB of memory in three nodes: two HP DL580s and an HP SuperDome X.


Anton is a special-purpose supercomputer designed to dramatically increase the speed of molecular dynamics simulations, allowing biomedical researchers to understand the motions and interactions of proteins and other biologically important molecules over longer time periods than previously possible. Designed and built by D. E. Shaw Research (DESRES), the Anton machine hosted at PSC was provided without cost by DESRES for non-commercial use by the national biomedical research community.


BioU is a bioinformatics educational resource funded by the NIH. It provides a stable environment in which classroom and individualized research training can occur. Small research projects, such as individualized class projects, graduate student projects, and many typical academic bioinformatics projects can be hosted on BioU. Projects requiring significant computational resources should be carried out on other, larger, PSC computing platforms.


Bridges - coming soon

Bridges is a new concept in HPC - a system designed to support familiar, convenient software and environments for both traditional and non-traditional HPC users. It is a richly connected set of interacting systems offering a flexible mix of gateways (web portals), Hadoop and Spark ecosystems, batch processing and interactivity. Bridges will include:

  • Compute nodes with hardware-supported shared memory ranging from 128GB to 12TB per node to support genomics, machine learning, graph analytics and other fields where partitioning data is impractical
  • GPU nodes to accelerate applications as diverse as machine learning, image processing and materials science
  • Database nodes to drive gateways and workflows and to support fusion, analytics, integration and data management
  • Webserver nodes to host gateways and provide access to community datasets
  • Data transfer nodes with 10 GigE connections to enable data movement between Bridges and XSEDE, campuses, instruments and other advanced cyberinfrastructure

Phase 1 of Bridges is planned to be available in January 2016.  Startup requests are being accepted now.
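
As a rough illustration of the kind of non-traditional workload Bridges is intended to host, the sketch below is a minimal Spark word-count job written against the standard PySpark API. The application name and input path are placeholders chosen for this example, not Bridges-specific settings, and actual job submission details will depend on the Bridges software environment.

# Minimal PySpark sketch (assumptions: PySpark is installed;
# "input.txt" is a placeholder path, not a Bridges dataset).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()

# Read a plain-text file into an RDD of lines.
lines = spark.sparkContext.textFile("input.txt")

# Classic word count: split lines into words, then count each word.
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

# Show the ten most frequent words.
for word, n in counts.takeOrdered(10, key=lambda kv: -kv[1]):
    print(word, n)

spark.stop()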