Salk was an SGI Altix 4700 shared-memory NUMA system dedicated to biomedical research. It comprised 36 blades; each blade held two Itanium2 Montvale 9130M dual-core processors, for a total of 144 cores.
The four cores on each blade shared 8 Gbytes of local memory. The blades were connected by a NUMAlink interconnect, through which the local memory on each blade was accessible to all the other processors in the system. Salk ran an enhanced version of the SuSE Linux operating system.
Salk was decommissioned on January 31, 2015.
Warhol was an 8-node Hewlett-Packard BladeSystem c3000. Each node had 2 Intel E5440 quad-core 2.83 GHz processors, for a total of 64 cores on the machine. The 8 cores on a node shared 16 Gbytes of memory. The nodes were interconnected by an InfiniBand communications link. Warhol ran a version of the CentOS Linux operating system.
There were multiple frontend nodes, which also used Intel E5440 processors and ran the same version of CentOS Linux as the compute nodes.
Warhol was decommissioned in September 2013.
Pople was an SGI Altix 4700 shared-memory NUMA system comprising 192 blades. Each blade held two Itanium2 Montvale 9130M dual-core processors, for a total of 768 cores. Each core had a clock rate of 1.66 GHz and could perform 4 floating point operations per clock cycle. Thus, the total floating point capability of the machine was 5.1 Tflops.
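The quoted peak follows directly from the core count, clock rate, and operations per cycle; a minimal check of the arithmetic (a sketch for illustration, not vendor-published code):

```python
# Peak floating-point capability of Pople (SGI Altix 4700),
# from the figures stated above.
cores = 768            # 192 blades x 2 dual-core processors
clock_hz = 1.66e9      # 1.66 GHz per core
flops_per_cycle = 4    # floating point operations per clock cycle

peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
print(f"{peak_tflops:.1f} Tflops")  # -> 5.1 Tflops
```

Note that this is theoretical peak; sustained application performance on real workloads is typically well below this figure.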
The four cores on each blade shared 8 Gbytes of local memory. The blades were connected by a NUMAlink interconnect, through which the local memory on each blade was accessible to all the other processors in the system. Pople ran an enhanced version of the SuSE Linux operating system.
Pople was decommissioned on August 31, 2011.
Bigben was a Cray MPP system with 2068 compute nodes linked by a custom-designed interconnect. Each node contained one dual-core 2.6 GHz AMD Opteron processor (model 285). Each core had its own cache, but the two cores on a node shared 2 Gbytes of memory and a network connection. Nineteen dedicated IO nodes were also connected to this network.
Bigben was primarily intended to run applications with very high levels of parallelism or concurrency (512–4136 cores).
Bigben was decommissioned on March 31, 2010.
Ben comprised 64 HP Alphaserver ES40 nodes with a separate front end node. Each computational node contained four 667-MHz processors and ran the Tru64 Unix operating system. A Quadrics interconnection network connected the nodes.
Each node was a 4 processor SMP, with 4 Gbytes of memory.
Rachel and Jonas
rachel.psc.edu and jonas.psc.edu
Rachel and Jonas were decommissioned on July 1, 2008.
Lemieux comprised 610 Compaq Alphaserver ES45 nodes and two separate front end nodes. Each computational node was a 4 processor SMP, with 1-GHz Alpha EV68 processors and 4 Gbytes of memory, running the Tru64 Unix operating system. A dual-rail Quadrics interconnect linked the nodes.
Lemieux was primarily intended to run applications with very high levels of parallelism or concurrency (512–2048 processors).
Lemieux was decommissioned on December 22, 2006.
The Sequence Analysis Resource
The PSC Sequence Analysis Resource, an Alphaserver 8400 5/300 system, was decommissioned on May 31, 2006.
The CRAY J90
The last of PSC’s Cray J90s was decommissioned in July 2002.
The CRAY T3E
jaromir.psc.edu was a scalable parallel T3E system. PSC was the first site to install a T3E.
Jaromir was an extremely valuable resource that enabled many scientific breakthroughs after entering service in 1996. However, eight years is a very long time in the world of supercomputers, and more powerful and more cost-effective platforms had become available. In keeping with its mission of providing cost-effective leading-edge capability to the national scientific community, PSC retired Jaromir on October 1, 2004.
The CRAY C90
The PSC’s CRAY Research, Inc. C90 (or, more correctly, C916/512) ran UNICOS, based on AT&T UNIX System V, with Berkeley extensions and Cray Research, Inc. enhancements.
The C90 was decommissioned on May 31, 1999.
The CRAY T3D
The CRAY T3D system was the first in a series of massively parallel processing (MPP) systems from CRAY Research. T3Ds were tightly coupled to CRAY Y-MP and C90 systems through a high-speed channel. PSC’s T3D prototype machine was tightly coupled to the center’s C90, creating a powerful heterogeneous environment.
The PSC T3D was decommissioned on April 30, 1999.
The PSC’s AlphaCluster was a collection of DEC Alpha workstations that offered a supercomputing class resource. With its ability to execute both single-threaded and distributed codes, the Cluster complemented the PSC’s Cray C90 vector machine and massively parallel Cray T3D. Applications suited to a single high-performance scalar machine, or to a loosely coupled set of such machines, were good candidates for the AlphaCluster.
The Cluster was decommissioned in 1998.