CM-2

The Connection Machine CM-2 was the first major component of PSC’s plans for heterogeneous computing. With 32,000 separate processing units, the CM-2 was a “massively parallel” computer, in a sense the antithesis of the Cray Y-MP, which had eight extremely powerful processors. Each CM-2 processor was less powerful than a personal computer, but for appropriate problems the machine attained supercomputer performance through a team approach: all 32,000 processors could compute simultaneously, each working on an independent segment of the job.
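The CM-2 itself was programmed in data-parallel languages such as CM Fortran and *Lisp. As a rough modern illustration of that team approach, the Python sketch below splits one job into independent segments and computes them simultaneously across a pool of worker processes; the work function and segment counts are illustrative, not CM-2 code.

```python
# Illustrative only: a modern data-parallel sketch of the "team approach",
# not CM-2 code (the CM-2 ran languages such as CM Fortran and *Lisp).
from multiprocessing import Pool

def process_segment(segment):
    # Each worker handles an independent segment of the job, much as each
    # CM-2 processor handled its own slice of the data.
    return sum(x * x for x in segment)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 8  # stand-in for the CM-2's thousands of processors
    chunk = len(data) // n_workers
    segments = [data[i * chunk:(i + 1) * chunk] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partial_sums = pool.map(process_segment, segments)  # all at once
    print(sum(partial_sums))  # combine the independent results
```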

A high-speed HiPPI (High-Performance Parallel Interface) link connected the CM-2 and the Y-MP, allowing users to divide tasks between the two machines and take advantage of the strengths of each for the appropriate parts of their research problems.

The CM-2 was in use from 1990 to 1992.

Research


Looking for Black Gold: Using a Massively Parallel Computer for Reservoir Simulation

Ernest Chung, Chevron Oil Field Research Co.

Typically, oil is found in the minute pores of sandstone and limestone.  To maximize yields, oil companies such as Chevron must pinpoint the elusive stuff, and for that task, supercomputer-generated simulations have proven their usefulness.  Today, in the United States, oil companies use simulations primarily to target existing reservoirs, because looking for oil is expensive and environmental standards have become more stringent.


Shared Assignment: A Distributed Computing Solution to the Assignment Problem

Gregory J. McRae, Carnegie Mellon University

McRae and Clay used the HiPPI link to do the computations for the “assignment problem,” a classic and important problem in the branch of applied mathematics known as combinatorial optimization. Their code in effect uses the two supercomputers as one linked system, passing data back and forth between the CRAY’s vector processing units and the CM-2’s massively parallel array of processors. The advantage is that the inherently serial parts of the problem can run on the CRAY, while the parts amenable to parallel solution can exploit the massively parallel architecture of the CM-2.
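For readers unfamiliar with it, the assignment problem asks for the minimum-cost way to match n workers to n tasks. The sketch below solves a tiny instance on a single machine with SciPy’s Hungarian-style solver; it illustrates the problem itself, not McRae and Clay’s distributed CRAY/CM-2 code.

```python
# A minimal single-machine illustration of the assignment problem;
# McRae and Clay's actual code split this work between the CRAY Y-MP
# and the CM-2 over the HiPPI link.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j]: cost of assigning worker i to task j (illustrative numbers)
cost = np.array([
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
])

rows, cols = linear_sum_assignment(cost)  # optimal minimum-cost matching
for worker, task in zip(rows, cols):
    print(f"worker {worker} -> task {task} (cost {cost[worker, task]})")
print("total cost:", cost[rows, cols].sum())
```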

Quark Soup: Simulating Hadron Thermodynamics with Massively Parallel Computing

The High-Temperature Quantum Chromodynamics Collaboration

Robert Sugar, University of California, Santa Barbara

Quantum chromodynamics, QCD as it is known in the trade, is the theory of the strongest force in nature, the force that holds the nucleus of an atom together. Protons and neutrons, says QCD, are bundles of quarks, three in each bundle. The job of weaving quarks into these webs of energy we call matter is carried out by particles called gluons, so named because they act like the strongest imaginable glue.

Parting the Waters: Distributing Molecular Dynamics Calculations Across Two Supercomputers

Charles Brooks and William Young, Carnegie Mellon University

A limiting factor for protein molecular dynamics has been the large amount of computing needed to account for the water molecules that surround a protein in its cellular environment and influence its shape. Even a relatively small protein (100-150 amino acids) requires about 5,000 water molecules to enclose its folded structure; examining the folding process could easily take more than 16,000. Brooks and Young hit on the idea of breaking out the part of the computation that involves only water molecules and giving it to the CM-2.
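A schematic sketch of that decomposition, with a toy pairwise interaction standing in for a real molecular dynamics force field, might look like the following. The point is the structure of the split, not the physics, and none of it is Brooks and Young’s actual code.

```python
# Schematic sketch of the decomposition idea: the water-water interactions,
# which dominate the work, are split out so they could be handed to a second
# machine (the CM-2) while the rest runs on the CRAY. Toy physics only.
import itertools

def pair_energy(a, b):
    # Toy inverse-square interaction between two 3-D points.
    d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return 1.0 / d2

def total_energy(protein_atoms, water_atoms):
    # Protein-protein and protein-water terms: the "CRAY" share of the work.
    e_protein = sum(pair_energy(a, b)
                    for a, b in itertools.combinations(protein_atoms, 2))
    e_cross = sum(pair_energy(a, b)
                  for a in protein_atoms for b in water_atoms)
    # Water-water terms: the part broken out for the "CM-2".
    e_water = sum(pair_energy(a, b)
                  for a, b in itertools.combinations(water_atoms, 2))
    return e_protein + e_cross + e_water

protein = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
water = [(2.0, 1.0, 0.0), (3.0, 0.0, 1.0), (0.0, 2.0, 2.0)]
print(total_energy(protein, water))
```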