Pittsburgh Supercomputing Center 1986-2016

Thirty years have passed since June 1986, when PSC opened its doors. From megaflops and megabytes to petaflops and petabytes, each generation has brought more memory, faster processing, and better data storage and accessibility, and we've been at the forefront of it all. Since 1986, more than 6,600 principal scientists and engineers (10,824 grants and 32,701 users in all) at nearly 1,500 institutions and research centers in 53 states and territories have used PSC computing resources.

Our collaborations with major institutions and vendors enable us to continue promoting and supporting innovative projects and programs with national and international impact on our lives and the lives of future generations. Our summaries of research and workforce development, once yearly and now biannual, highlight the center's accomplishments over the last 30 years and continue to address questions of broad social impact.

Join us in celebrating 30 years of people, science and collaboration!


PSC 30-Year Machine Timeline

CRAY X-MP/48

The original machine at Pittsburgh Supercomputing Center, the X-MP could perform up to 840 million arithmetic operations every second. It had eight million words of memory and was connected to sixteen DD-49 disk drives. It was also equipped with a 128-million-word SSD (solid-state storage device). This SSD could transfer data to the main processor 100 times faster than the disks, and effectively expanded the X-MP's memory to 128 million words. The X-MP had four independent processors, each of which had fourteen independent functional units.
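The X-MP's capacities translate readily into modern units; a back-of-envelope sketch, assuming the standard 64-bit (8-byte) Cray word:

```python
# Converting Cray word counts to bytes (a Cray word was 64 bits = 8 bytes).
WORD_BYTES = 8
main_memory = 8e6 * WORD_BYTES    # eight million words -> 64 MB
ssd = 128e6 * WORD_BYTES          # 128 million words -> ~1 GB
per_cpu = 840e6 / 4               # peak operations split across 4 processors

print(f"main memory: {main_memory/1e6:.0f} MB, SSD: {ssd/1e9:.2f} GB")
print(f"~{per_cpu/1e6:.0f} million operations/second per processor")
```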

CRAY X-MP 1986-1989

PSC's first supercomputer made an early product-development impact when Alcoa used it to improve the design of the lightweight aluminum can.

CRAY Y-MP

Three times the power and four times the memory of the X-MP, this machine was the next big thing in supercomputing. The Y-MP retained software compatibility with the X-MP, but extended the address registers from 24 to 32 bits. High-density VLSI ECL technology was used and a new liquid cooling system was devised. The Y-MP ran the Cray UNICOS operating system. The Y-MP could be equipped with two, four or eight vector processors, with two functional units each and a clock cycle time of 6 ns (167 MHz). Peak performance was thus 333 megaflops per processor. Main memory comprised 128, 256 or 512 MB of SRAM.
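The 333 megaflops figure follows directly from the clock rate and the functional units; a quick sketch, assuming each of the two units retires one result per cycle:

```python
# Peak per-processor performance of the Cray Y-MP (sketch; assumes one
# result per functional unit per clock, as the 333 Mflops figure implies).
clock_ns = 6.0                       # clock cycle time in nanoseconds
clock_hz = 1.0 / (clock_ns * 1e-9)   # ~167 MHz
flops_per_cycle = 2                  # two functional units per processor
peak_per_cpu = clock_hz * flops_per_cycle   # ~333 Mflops

print(f"{clock_hz/1e6:.0f} MHz -> {peak_per_cpu/1e6:.0f} Mflops/processor")
print(f"8-processor peak: {8 * peak_per_cpu / 1e9:.2f} Gflops")
```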

Cray Y-MP 1989-1993

CM-2/CM-5 (Connection Machines)

The CM-2 was the first major component of PSC's plans for heterogeneous computing. With 32,000 separate processing units, the CM-2 was a "massively parallel" computer, in a sense the antithesis of the Cray Y-MP, which had 8 extremely powerful processors. Each CM-2 processor was less powerful than a personal computer, but for appropriate problems it attained supercomputer performance via the team approach: all 32,000 processors could compute simultaneously, working on independent segments of the job.

CM-2 1990-2000

Mario (CRAY C90)

The PSC's CRAY C90 (or, more correctly, C916/512), nicknamed Mario, ran UNICOS, based on AT&T UNIX System V, with Berkeley extensions and Cray Research, Inc. enhancements. Compared to the Y-MP, its predecessor at PSC, the C90 processor had a dual vector pipeline and a faster 4.1 ns clock cycle (244 MHz), which together gave three times the performance of the Y-MP processor. The maximum number of processors in a system was also doubled from eight to 16. The C90 series used the same Model E IOS (Input/Output Subsystem) and UNICOS operating system as the earlier Y-MP Model E.
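The "three times the performance" claim can be checked from the stated specs; a back-of-envelope sketch, assuming each vector pipeline delivers two flops per cycle:

```python
# Why a C90 processor was roughly 3x a Y-MP processor (back-of-envelope).
ymp_peak = (1 / 6e-9) * 2          # 6 ns clock, single pipeline -> ~333 Mflops
c90_peak = (1 / 4.1e-9) * 2 * 2    # 4.1 ns clock, dual pipelines -> ~976 Mflops
ratio = c90_peak / ymp_peak

print(f"C90/Y-MP per-processor ratio: {ratio:.1f}x")
```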

PSC was the first non-government site in the U.S. to receive a Cray C90.

Cray C90 1992-1999

PSC's impact in biomedical science was marked by the first realistic 3D model of blood flow in a human heart, which informed the design of an artificial heart valve.

CRAY T3D

The CRAY T3D system was the first in a series of massively parallel processing (MPP) systems from CRAY Research. PSC's T3D prototype machine was tightly coupled to CRAY Y-MP and C90 systems through a high speed channel, creating a powerful heterogeneous environment.

CRAY T3D 1993-1999

PSC has had a groundbreaking impact on public safety with the first model of storm conditions that spontaneously produced tornadoes, helping launch the lifesaving field of predictive storm modeling.

Jaromir (CRAY T3E)

PSC's T3E, nicknamed Jaromir, was the first production T3E shipped from Cray Research, Inc. The T3E was a scalable, massively parallel distributed memory machine using a 3D torus topology interconnection network. The initial 256-processor configuration expanded to 512 processors. Over the span of its service Jaromir provided 25 million CPU hours to more than 3,000 researchers.

CRAY T3E 1996-2004

Lemieux (TCS)

The Terascale Computing System, also known as Lemieux, comprised 610 Compaq AlphaServer ES45 nodes and two separate front-end nodes. Each computational node was a 4-processor SMP with 1-GHz Alpha EV68 processors and 4 Gbytes of memory. A dual-rail Quadrics interconnect linked the nodes.
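The node specs imply the machine's aggregate scale; a quick sketch from the figures above:

```python
# Lemieux's aggregate scale: nodes x processors (and memory) per SMP node.
nodes = 610
cpus_per_node = 4
mem_gb_per_node = 4

total_cpus = nodes * cpus_per_node      # 2440 Alpha EV68 processors
total_mem_gb = nodes * mem_gb_per_node  # ~2.4 TB aggregate memory

print(f"{total_cpus} processors, {total_mem_gb} GB total memory")
```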

Lemieux was primarily intended to run applications with very high levels of parallelism or concurrency (512-2048 processors).

At the time of its installation in 2001, Lemieux was the most powerful system in the world committed to unclassified research.

LEMIEUX 2001-2006

Seminal research showed how the cell membrane protein aquaporin allows only water to pass through the cell membrane; the work was cited by a 2003 Nobel Prize recipient.

Rachel & Jonas (Marvel)

Jonas, named for Jonas Salk, and Rachel, named for Rachel Carson, were GS1280 AlphaServers from Hewlett-Packard. They had a shared memory architecture and exceptional "memory bandwidth" (the speed at which data transfers between hardware memory and the processor) — five to ten times greater than comparable systems of the time. Rachel and Jonas were among the first GS1280s to roll out of HP production.

When they arrived at PSC in 2003, each had 32 Gbytes of shared memory and 16 EV7 processors. By 2008, the Rachel and Jonas systems had each grown to be a loosely coupled set of machines. Each machine held 64 processors and 256 Gbytes of shared memory. Jonas was dedicated to biomedical research, while Rachel supported NSF science and engineering.

Marvels 2003-2008

BigBen (CRAY XT3)

Nicknamed BigBen, the Cray XT3 MPP system had 2068 compute nodes linked by a custom-designed interconnect. Each node contained one dual-core 2.6 GHz AMD Opteron processor (model 285). Each core had its own cache, but the two cores on a node shared 2 Gbytes of memory and a network connection. Nineteen dedicated IO nodes were also connected to this network.

BigBen was primarily intended to run applications with very high levels of parallelism or concurrency (512-4136 cores).
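The upper end of that concurrency range is simply the machine's full core count; a sketch from the node figures above:

```python
# BigBen's maximum concurrency: compute nodes x cores per node.
compute_nodes = 2068
cores_per_node = 2    # one dual-core Opteron 285 per node
max_cores = compute_nodes * cores_per_node

print(max_cores)      # the upper end of the 512-4136 range
```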

BigBen 2005-2010

BigBen screened photochromic substances for PPG, which beat the competition in delivering a fifth-generation light-responsive lens coating known as the Transitions lens.

Pople & Salk (SGI Altix 4700)

Pople was an SGI Altix 4700 shared-memory NUMA system comprising 192 blades. Each blade held 2 Itanium2 Montvale 9130M dual-core processors, for a total of 768 cores. Each core had a clock rate of 1.66 GHz and could perform 4 floating point operations per clock cycle, bringing the total floating point capability of Pople to 5.1 Tflops.

The four cores on each blade shared 8 Gbytes of local memory. The processors were connected by a NUMAlink interconnect. Through this interconnect, the local memory on each processor was accessible to all the other processors on the system.
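Pople's 5.1 Tflops figure follows from the core count, clock rate, and flops per cycle given above; a quick sketch:

```python
# Pople's aggregate peak: cores x clock rate x flops per cycle.
cores = 192 * 2 * 2            # 192 blades, 2 dual-core Itanium2 CPUs each
clock_hz = 1.66e9
flops_per_cycle = 4
peak = cores * clock_hz * flops_per_cycle

print(f"{cores} cores -> {peak/1e12:.1f} Tflops peak")
```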

Salk was an SGI Altix 4700 shared-memory NUMA system dedicated to biomedical research.  It comprised 36 blades; each blade held 2 Itanium2 Montvale 9130M dual-core processors, for a total of 144 cores.

Pople & Salk 2005-2010

PSC made a cybersecurity impact with a call for better internet security after Pople showed that individuals' Social Security numbers could be guessed from public information available on the Web.

Warhol (HP)

An important resource for researchers in Pennsylvania, Warhol was an 8-node Hewlett-Packard BladeSystem c3000. Each node had 2 Intel E5440 quad-core 2.83 GHz processors, for a total of 64 cores on the machine. The 8 cores on a node shared 16 Gbytes of memory, and the nodes were interconnected by an InfiniBand communications link. 

WARHOL 2009-2013

Blacklight (SGI Altix UV 1000)

Blacklight was an SGI Altix UV 1000 supercomputer designed for memory-limited scientific applications in fields as diverse as biology, chemistry, cosmology, machine learning and economics. Funded by the National Science Foundation (NSF), Blacklight carried out this mission with partitions offering as much as 16 Terabytes of coherent shared memory.

Blacklight 2010-2015

PSC's Blacklight helped researchers enable better organ exchange programs by calculating optimum donor exchanges to expand the number of donor/recipient pairs and broaden the criteria for matches so more hard-to-match patients could find donors.

Bridges and beyond...

Bridges is a uniquely capable resource for empowering new research communities and bringing together HPC and Big Data. Bridges is designed to support familiar, convenient software and environments for both traditional and non-traditional HPC users. Its richly connected set of interacting systems offers exceptional flexibility for data analytics, simulation, workflows and gateways, leveraging interactivity, parallel computing, Spark and Hadoop.

Bridges 2016-

Learn more about Bridges.


PSC Media Contacts

Media / Press Contact(s):

Kenneth Chiacchia
Pittsburgh Supercomputing Center
chiacchi@psc.edu
412-268-5869

Vivian Benton
Pittsburgh Supercomputing Center
benton@psc.edu
412-268-4960

Website Contact

Shandra Williams
Pittsburgh Supercomputing Center
shandraw@psc.edu
412-268-4960

Use of PSC materials: To request permission to use PSC materials, please complete this form.