Seattle -- From realistic predictive modeling of natural and man-made disasters to atomic-level explorations of photosynthetic bacteria, supercomputers are enabling next-generation applications in science and technology. Increasingly, the machines' modeling muscle is even being applied to the design of consumer products. Supercomputing 2005 (SC/05), held here last week, showcased achievements and trends in the field.
Simulations remain the most important supercomputer applications. At SC/05, researchers discussed the use of supercomputer simulations to forecast the course of disasters, such as modeling wave heights or predicting the drift of a plume from a so-called dirty bomb. But the era of exclusively big-application supercomputer simulations is giving way to an era of small, everyday ones, said Thomas Lange, director of modeling and simulation for Procter & Gamble.
According to Lange, a consumer product's cost per use is inversely related to the complexity of the production systems needed to make it: The lower a product's cost per use, the more complex the production system, and hence the greater the need for supercomputer modeling and simulation. Lange told his SC/05 audience that supercomputers have become critical to designing and perfecting the performance of innovative consumer products before they are released to the public.
Learning from A-bomb
Procter & Gamble enlisted the big-simulation expertise of the atomic-bomb makers at Los Alamos National Laboratory to design its modeling and simulation application, a software suite dubbed PowerFactoRE. Use of the suite has deepened the company's understanding of how its consumer products function and which processes most affect their performance, Lange said. PowerFactoRE received an R&D 100 Award from the Department of Energy, and P&G now sells the software to other consumer-products companies for use in product performance analysis.
Since the 1992 moratorium on nuclear testing, the most strategically important modeling application for supercomputers has been simulation of atomic explosions. Indeed, such work has become the "linchpin of the nuclear enterprise," said conference speaker Dimitri Kusnezov, acting deputy administrator for the National Nuclear Security Administration.
NNSA runs three of the world's fastest supercomputers for its Stockpile Stewardship Program: the Purple platform, developed with IBM Corp. and Lawrence Livermore National Laboratory under the Advanced Simulation and Computing Program of the NNSA and the Department of Energy; BlueGene/L, developed with Livermore and IBM; and the Red Storm platform, developed with Sandia National Laboratories and Cray Inc.
Last week's conference continued a common practice in the supercomputing world: the smashing of records. Bragging rights to the world's largest all-electron calculation were claimed by a team that said it had simulated every atom and every electron in the photosynthetic reaction center of the bacterium Rhodopseudomonas viridis.
The National Institute of Advanced Industrial Science and Technology (Tsukuba, Japan) said it had simulated the complete electronic structure of the reaction center's 20,581 atoms and 77,754 electrons using the fragment molecular orbital method on a cluster computer with 600 microprocessors, in a calculation that took 72.5 hours.
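The fragment molecular orbital (FMO) method gets its speed by breaking a molecule into fragments, running a quantum calculation on each fragment and each fragment pair, and assembling the total energy from those pieces. The Python sketch below illustrates the standard two-body FMO bookkeeping with made-up placeholder energies; it is not the AIST team's code:

```python
from itertools import combinations

def fmo2_total_energy(monomer_E, dimer_E):
    """Two-body FMO energy expansion:
    E ~ sum_I E_I + sum_{I<J} (E_IJ - E_I - E_J).

    monomer_E: dict mapping fragment index -> fragment energy E_I
    dimer_E:   dict mapping (I, J) pair    -> fragment-pair energy E_IJ
    (In a real FMO run, these come from quantum calculations on each
    fragment embedded in the electrostatic field of the others.)
    """
    total = sum(monomer_E.values())
    for i, j in combinations(sorted(monomer_E), 2):
        # Pair correction: interaction energy of fragments I and J
        total += dimer_E[(i, j)] - monomer_E[i] - monomer_E[j]
    return total

# Toy example with three fragments (energies in hartrees, invented):
monomers = {1: -75.0, 2: -76.1, 3: -74.8}
dimers = {(1, 2): -151.2, (1, 3): -149.9, (2, 3): -151.0}
print(fmo2_total_energy(monomers, dimers))
```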
According to the team's paper, the previous world record was a density functional theory calculation on a system of 4,716 atoms that had taken 17 days to run on a 32-processor supercomputer.
NASA, meanwhile, announced it had crafted the first complete simulation of a space shuttle flight from liftoff to re-entry, a feat achieved using its year-old Columbia supercomputer. The key, NASA said, was running two aerodynamic simulation packages in parallel: a high-fidelity, unstructured Reynolds-averaged Navier-Stokes solver and a fully automated inviscid-flow package for cut-cell Cartesian grids. Neither simulation technique alone could have provided a complete characterization, according to the NASA presenters, but the two running in parallel enabled a full performance simulation over the entire flight envelope, including detailed parametric analysis, with emphasis on the most critical stages of the flight.
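In practice, coupling a cheap solver and an expensive one this way is a scheduling exercise: sweep the whole flight envelope with the fast inviscid code while reserving the costly RANS runs for the critical flight stages. The Python sketch below illustrates the pattern; the solver functions and flight conditions are hypothetical stand-ins, not NASA's actual codes or data:

```python
from concurrent.futures import ProcessPoolExecutor

def inviscid_cartesian_run(cond):
    """Placeholder: cheap cut-cell Cartesian inviscid analysis of one
    (Mach, angle-of-attack) flight condition."""
    mach, alpha = cond
    return ("inviscid", mach, alpha)

def rans_run(cond):
    """Placeholder: expensive unstructured Reynolds-averaged
    Navier-Stokes analysis of one flight condition."""
    mach, alpha = cond
    return ("rans", mach, alpha)

if __name__ == "__main__":
    # Sweep the whole flight envelope with the fast inviscid code ...
    envelope = [(m, a) for m in (0.6, 0.9, 1.2, 2.0, 5.0)
                for a in (-2.0, 0.0, 2.0, 4.0)]
    # ... while reserving the costly RANS code for critical stages
    # (e.g. transonic conditions), dispatching both families at once.
    critical = [(1.05, 2.0), (0.95, 4.0)]
    with ProcessPoolExecutor() as pool:
        coarse = [pool.submit(inviscid_cartesian_run, c) for c in envelope]
        fine = [pool.submit(rans_run, c) for c in critical]
        results = [f.result() for f in coarse + fine]
```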
The 10,240-processor Columbia supercomputer is housed at the NASA Advanced Supercomputing facility at Ames Research Center (Moffett Field, Calif.). The machine is built from 20 Silicon Graphics Inc. Altix 3700 systems, each housing 512 Intel Itanium 2 processors, and achieves 42.7 trillion calculations per second (teraflops) on the Linpack benchmark.
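The stated figures are mutually consistent:

$$20 \times 512 = 10{,}240 \ \text{processors}, \qquad \frac{42.7\ \text{teraflops}}{10{,}240\ \text{processors}} \approx 4.2\ \text{gigaflops sustained per processor}.$$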
NASA reported having achieved good scalability in its simulation, using up to 2,016 processors, spanning four of Columbia's 20 Altix nodes. The NASA team said it is now working to extend the simulation across all 20 nodes.
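"Good scalability" is conventionally quantified as parallel efficiency relative to a baseline processor count; NASA did not give its exact numbers here, so the formula below is the generic definition rather than the team's result:

$$E(p) = \frac{p_0\, T(p_0)}{p\, T(p)},$$

where $T(p)$ is the wall-clock time on $p$ processors and $p_0$ is the baseline count; $E(p)$ near 1 indicates near-linear scaling.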
While simulating a shuttle flight or a bacterial structure proceeds largely from known, fixed inputs, simulating the airborne drift of a contaminant plume depends on inputs that arrive unpredictably and are incomplete and unreliable. Simulating a toxic contaminant plume requires a dynamic data-driven (DDD) approach to make sense of first responders' reports, which are subject to error, according to a team working under the auspices of Sandia National Laboratories.
DDD supercomputer simulations must cope with inverse calculations, which involve deducing unknown starting conditions from sparse observations of their consequences. The team reported that its trial simulations could be updated every 29 minutes, and that even highly detailed inverse calculations, with 135 million parameters corresponding to 139 billion total unknowns, were solved in less than five hours on 1,024 Hewlett-Packard Co. AlphaServer EV68 processors.
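In generic terms (this is a textbook formulation of such source-inversion problems, not necessarily the Sandia team's exact one), the inverse calculation seeks the initial contaminant distribution $u_0$ that best explains sparse, noisy sensor readings $d_j$ while respecting the transport physics:

$$\min_{u_0}\ \frac{1}{2}\sum_{j} \left\| \mathcal{B}\,u(\cdot, t_j) - d_j \right\|^2 + \frac{\beta}{2}\,\| u_0 \|^2 \quad \text{subject to} \quad \frac{\partial u}{\partial t} + \mathbf{v}\cdot\nabla u - \nu\,\nabla^{2} u = 0, \qquad u(\cdot, 0) = u_0,$$

where $u$ is the concentration field, $\mathbf{v}$ the wind velocity, $\nu$ the diffusivity, $\mathcal{B}$ an operator sampling the field at the sensor locations, and $\beta$ a regularization weight. Every candidate $u_0$ implies a full forward transport solve through time, which helps explain how 135 million inversion parameters balloon into 139 billion total unknowns.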
Thus, the Sandia team concluded that dynamic data-driven simulations of the path of airborne contaminants would be possible, albeit computationally costly.