In Progress, 2006
- Chips to Go
- Hercules Does Northridge
- Ketchup on the Grid
- Stroke Busters
- Step Up to the BAR Domain
- Why Giant Planets Wear Stripes
A snapshot from the MD/QM simulations of SIMOX shows oxygen atoms (yellow circles) injected perpendicular to a silicon surface of 110,000 atoms. Each of five QM regions of silicon (red) penetrated by oxygen expands and changes shape, tracking the migration of the oxygen and the breaking and reforming of silicon-silicon bonds.
From cell phones to laptops, integrated circuits — “chips” — that run fast on low power are the brains of our mobile electronic culture. SIMOX (separation by implantation of oxygen) is the favored method for sculpting circuits in SOI (silicon on insulator) chips — the usual choice for portable, battery-powered devices.
With energized beams of oxygen, SIMOX oxidizes a thin layer of silicon deep inside a chip. For further advances in chip design, the oxygen beams need to be able to sculpt thinner layers, from 10 microns currently to 10 nanometers, 1,000 times thinner, which means using 1,000 times lower energy. This requires atomic-level understanding of how oxygen atoms migrate through silicon at these low energies.
Quantum-level simulations can provide this knowledge, but they demand extraordinary amounts of computation. To harness the necessary resources, researchers from Advanced Industrial Science and Technology (AIST) and the Nagoya Institute of Technology in Japan teamed with materials scientists at the University of Southern California (USC) to create a U.S.-Japan grid testbed. Led by Yoshio Tanaka of AIST and Aiichiro Nakano of USC, they created a grid-enabled framework that allowed them to do quantum-mechanical (QM) simulation of oxygen atoms moving through silicon. The QM models were embedded in classical molecular dynamics (MD) simulations for the surrounding silicon.
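The embedding strategy described above — expensive quantum mechanics only where bonds break, cheap classical dynamics everywhere else — can be sketched in highly simplified form. The code below is an illustrative toy, not the AIST/USC framework: the force functions are placeholders, and the single spherical QM region that re-centers on the migrating oxygen atom each step stands in for the five adaptive QM regions of the real simulation.

```python
import numpy as np

# Toy sketch of a hybrid MD/QM step (illustrative only, not the actual
# AIST/USC grid code). Atoms inside the QM region get "quantum" forces;
# all other atoms get cheap classical force-field forces.

def classical_forces(positions):
    """Placeholder classical MD force field (zero forces for brevity)."""
    return np.zeros_like(positions)

def qm_forces(positions, qm_mask):
    """Placeholder QM forces, evaluated only for atoms in the QM region."""
    f = np.zeros_like(positions)
    f[qm_mask] = -0.1 * positions[qm_mask]  # toy restoring force
    return f

def qm_region_mask(positions, center, radius):
    """QM region: all atoms within `radius` of the tracked oxygen atom."""
    return np.linalg.norm(positions - center, axis=1) < radius

def step(positions, velocities, oxygen_index, dt=1e-3, radius=5.0):
    # The QM region is re-centered every step so it tracks the oxygen
    # as it migrates through the silicon lattice.
    mask = qm_region_mask(positions, positions[oxygen_index], radius)
    forces = classical_forces(positions)
    forces[mask] = qm_forces(positions, mask)[mask]
    velocities = velocities + forces * dt   # unit masses for simplicity
    positions = positions + velocities * dt
    return positions, velocities, mask
```

The key design point is that only the handful of atoms near the moving oxygen pay the quantum-mechanical price; the other ~110,000 silicon atoms are advanced classically each step.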
They used TeraGrid systems at NCSA and SDSC along with clusters at AIST, and relied heavily on PSC’s LeMieux. In total, the simulation consumed 150,000 processor hours. Results reveal that the depth of oxygen penetration depends critically on the incident position of the oxygen beam.
This snapshot from QuakeShow during an earthquake simulation shows amplified seismic waves in the San Fernando Valley.
A major challenge with large-scale parallel simulations on thousands of processors, such as those run on PSC's Cray XT3, is how to visualize and interpret the huge quantities of data they produce. The Quake Group, led by Carnegie Mellon civil and computational engineer Jacobo Bielak and computer scientist David O'Hallaron, has been working with PSC on earthquake modeling for over a decade, along with University of Texas computational geoscientist Omar Ghattas. Early this year their student Tiankai Tu, together with Hongfeng Yu (advised by Kwan-Liu Ma at UC Davis), led a collaboration with PSC staff to develop Hercules, a new end-to-end simulation system that incorporates all the components of earthquake simulation into a unified framework: building the mesh, partitioning the job among processors, running the simulation itself, and visualizing the results. With this approach, it has become possible to visualize the simulation in real time.
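The "unified framework" idea — every stage from meshing to rendering inside one program, with no intermediate files handed between separate tools — can be sketched as a simple pipeline. All names and stage bodies below are illustrative stubs, not the actual Hercules API:

```python
# Schematic of an end-to-end earthquake-simulation pipeline in the
# spirit of Hercules (illustrative stubs only, not the real code).

def build_mesh(domain):
    # Octree-style adaptive refinement would happen here; stubbed.
    return [f"element-{i}" for i in range(domain["n_elements"])]

def partition(mesh, n_procs):
    # Assign contiguous chunks of elements to processors.
    chunk = max(1, len(mesh) // n_procs)
    return [mesh[i:i + chunk] for i in range(0, len(mesh), chunk)]

def solve(partitions, n_steps):
    # Stand-in for the parallel wave-propagation time loop.
    return [{"step": s, "parts": len(partitions)} for s in range(n_steps)]

def visualize(frames):
    # Stand-in for in-situ rendering of each solver frame.
    return [f"rendered step {f['step']}" for f in frames]

def hercules(domain, n_procs, n_steps):
    mesh = build_mesh(domain)
    parts = partition(mesh, n_procs)
    frames = solve(parts, n_steps)
    return visualize(frames)
```

Because each stage consumes the previous stage's in-memory output directly, visualization can keep pace with the solver instead of waiting for terabytes of output to be written and post-processed.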
Hercules relies on PDIO (Portals Direct I/O), software developed by PSC staff that supports run-time remote interaction with a parallel program on the Cray XT3. PDIO routes data between the Hercules simulation and a remote laptop or desktop running QuakeShow, a visualization program that lets users change view angles, zoom in or out, and perform other operations while the simulation is running.
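The streaming pattern this enables — the solver pushing a visualization frame at regular intervals into a channel that a viewer drains concurrently, so rendering never blocks the time loop — can be mimicked with a thread and a queue. This is a toy analogy, not PDIO's actual API (PDIO moves data off the XT3's Portals network to remote machines):

```python
import queue
import threading

# Toy sketch of the run-time streaming pattern PDIO enables
# (illustrative only): the simulation pushes a frame every N time
# steps, and a "remote viewer" consumes frames concurrently.

FRAME_INTERVAL = 10          # matches the "every 10 time steps" cadence
channel = queue.Queue()

def simulate(total_steps):
    for step in range(total_steps):
        # ... advance the wave-propagation solve one time step ...
        if step % FRAME_INTERVAL == 0:
            channel.put({"step": step, "field": f"frame-{step}"})
    channel.put(None)        # sentinel: simulation finished

def view():
    frames = []
    while (frame := channel.get()) is not None:
        frames.append(frame["step"])   # stand-in for rendering
    return frames

t = threading.Thread(target=simulate, args=(50,))
t.start()
rendered = view()
t.join()
```

In the real system the channel crosses the wide-area network, and QuakeShow can also send steering commands (view angle, zoom) back upstream while the run continues.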
In August, the Quake team applied Hercules to simulate the 1994 Northridge earthquake. Using 1,024 XT3 processors, they simulated 80 seconds of earthquake with an adaptive mesh of 9.9 million elements. Real-time visualization every 10 time steps allowed the researchers to view difficult-to-observe physical phenomena. “We were able to see strong concentrations of seismic energy in both the San Fernando Valley and the Los Angeles Basin,” says Bielak, “while seismic waves in the nearby Santa Monica Mountains and San Gabriel Mountains had dissipated — a validation that sedimentary basins trap seismic energy during strong earthquakes.”
The researchers plan to use Hercules with the XT3 to simulate a magnitude 7.7 scenario quake along the San Andreas fault in a large region of Southern California at higher frequencies, requiring much more computation. Higher vibration frequencies are important because they present the greatest danger to most city structures.
Two gyroid domains with a domain wall running vertically between them. The gyroid phase forms in a mixture of oil (red), water (blue) & surfactant. The image shows opposite handedness of the two domains.
Turn a bottle of ketchup upside down. It’s a liquid and therefore should pour, right? New scientific findings from a large-scale transatlantic project may not make ketchup flow smoothly, but they describe previously unknown details of liquid mixtures, such as ketchup, that behave like solids.
Led by theoretical chemist Peter Coveney of University College London and Bruce Boghosian of Tufts University, and jointly funded by the National Science Foundation and the UK’s Engineering and Physical Sciences Research Council, the project used resources at four TeraGrid sites — PSC, NCSA, SDSC and UC/Argonne — along with U.K. resources at Daresbury Lab and Manchester. The researchers steered multi-site simulations to zero in on a defect phase of the gyroid, then used LeMieux, PSC’s terascale system, for a very large-scale simulation of this defect phase. The results show the formation and evolution of realistic “gyroid” systems, structures that are a fascinating hybrid of liquid and solid physical features. Such systems are widespread both in living organisms — where they are thought to feature in certain lipid structures — and in the electronics display industry, where there’s much interest in understanding how they operate.
Published in June 2006 by the Royal Society, the U.K.’s national academy of science, the findings present the first analysis of gyroid self-assembly and defect dynamics. “Liquid-crystal systems are scientifically important, and this study was possible only because multisite grid resources were available,” says Coveney, who, with his colleague Jonathan Chin, authored the Royal Society paper. “We’re in a region where nobody has studied these properties until now and we’ve been able with computation to make predictions and invite experimentalists to take a look.”
Instantaneous streamlines captured during the systolic phase of the cardiac cycle. The swirling pattern downstream of the stenosis depicts the presence and degree of turbulence.
(Graphic rendered by Greg Foss, PSC)
The main cause of strokes is probably arterial narrowing, or stenosis, due to atherosclerosis in the carotid artery, the main thoroughfare for blood to the brain. Plaque that builds up in a stenosed region can break loose and block an artery downstream, a problem that’s most dangerous in the internal carotid artery, just past where the carotid splits into two branches.
A predictor of the possible risk of plaque breakup is shear stress produced by blood flow in the stenosed region. Experiments show that such flows can become turbulent downstream of severely narrowed regions. To gather insight into this phenomenon and obtain more accurate data than is possible from experiment, a team led by George Karniadakis of Brown University has undertaken direct numerical simulation of realistic 3D flow through a stenosed artery. For these demanding computations, they applied Nektar, a high-order spectral element code developed at Brown, with a realistic 3D geometry of a stenosed carotid artery reconstructed from MRI. They ran simulations on TeraGrid resources at NCSA and on PSC’s Cray XT3, using up to 196 processors.
“We discovered a transition to turbulence during the systole phase,” says Karniadakis. This transition occurs downstream from the throat where the internal carotid artery narrows. This turbulence decays downstream, and also decays during the diastolic phase. The simulations show that shear stress on the artery wall increases during this turbulent phase. With the availability of more powerful computation, such as anticipated petascale systems, this information could become useful to help predict patients at high risk for stroke.
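The wall shear stress at issue here is, for a fluid treated as Newtonian, the viscosity times the velocity gradient at the vessel wall. The fragment below illustrates that relation with a crude one-sided finite difference on made-up near-wall values; it is only a back-of-the-envelope sketch, nothing like Nektar's evaluation from its high-order spectral-element solution:

```python
# Illustrative estimate of wall shear stress tau_w = mu * du/dy at the
# wall, using a one-sided finite difference on a near-wall velocity
# profile. All numbers are invented for illustration.

MU_BLOOD = 3.5e-3          # typical dynamic viscosity of blood, Pa*s

def wall_shear_stress(u, y, mu=MU_BLOOD):
    """u[0] is the no-slip wall velocity (0); y[0] is the wall location."""
    dudy = (u[1] - u[0]) / (y[1] - y[0])   # one-sided difference
    return mu * dudy

# Near-wall samples: velocity in m/s at distances y in meters.
u = [0.0, 0.02, 0.04]
y = [0.0, 1e-4, 2e-4]
tau = wall_shear_stress(u, y)   # roughly 0.7 Pa for these toy values
```

The steeper the velocity gradient that turbulence drives near the wall, the larger this stress — which is why a transition to turbulence downstream of the stenosis matters for plaque breakup.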
This closeup of the mid-section of the 50-nanometer length of membrane after 27 nanoseconds of simulation shows the BAR domain (orange & yellow helices) molded to the membrane surface.
University of Utah chemist Gregory Voth and grad student Phil Blood are using PSC’s Cray XT3 to tackle a basic question of endocytosis — the life-sustaining process by which cells absorb outside material by bending their membrane into a “vesicle” that engulfs it. All animal cells depend on endocytosis, which involves various steps but begins with curvature of the membrane.
BAR domains are a family of banana-shaped proteins shown to bind to the cellular membrane as it curves. Experiments suggest that BAR domains mold their concave surface to a section of membrane and induce a corresponding curvature. Voth and Blood undertook molecular dynamics simulations to look more closely. With the XT3 they’ve been able to run efficiently, using software called NAMD, with as many as 1,024 processors. “The XT3 has been amazing,” says Blood. “We haven’t found a hard limit on scaling up the number of processors.”
They used TeraGrid systems at SDSC, NCSA and University of Chicago/Argonne to construct a model and to explore how long a stretch of membrane they needed for curvature to occur. Their final simulations used the XT3 to include the protein with a 50-nanometer length of membrane—probably the longest patch of membrane ever simulated—for a total of 738,000 atoms. Their results, reported in Proceedings of the National Academy of Sciences (2006), show that the orientation of the BAR domain as it attaches to the membrane determines the degree of curvature.
Results from Glatzmaier’s 3D simulation represent banded zonal flow on a giant planet’s surface, with eastward flow (red and yellow) contrasted with westward (blue). A snapshot of the generated magnetic field shows lines of force outward (orange) and inward (blue).
Wind in the upper atmosphere of giant planets like Jupiter and Saturn blows both ways, west and east — shifting with latitude in banded zones. Known as differential rotation, this phenomenon has stirred much interest and several theories, but scientists have yet to settle on a conclusive explanation for how these banded wind patterns are maintained or how deep below the surface they extend.
Like the Sun, a giant gaseous planet has a fluid interior that transports heat outward by turbulent convective motions. Because of this thermal convection and effects of the planet’s 10-hour rotation period, the fluid interior, instead of rotating as a solid body, rotates at different rates at different latitudes and depths. The resulting shear flow of the electrically conducting fluid generates a magnetic field. In 1995, running at PSC, Gary Glatzmaier of the University of California, Santa Cruz, used a computational model he developed to produce the first self-consistent simulation of convection and magnetic-field generation in Earth's fluid core. With a modified version of that code, he is now simulating convection, differential rotation and magnetic-field generation in giant planets.
In recent work, Glatzmaier and graduate students Martha Evonuk and Tamara Rogers propose a mechanism more robust than previous theories for differential rotation in the interiors and atmospheres of giant planets. Previous models have neglected the effects of the large variation of density with depth and how this generates vorticity as rising fluid expands and sinking fluid contracts. Simulations of the newly proposed mechanism, using LeMieux, PSC’s terascale system, demonstrate how it can maintain differential rotation and a global magnetic field similar to those observed on the surfaces of Jupiter and Saturn.