The HPCC Program also funds the development of applications other than the Grand Challenges. This work is carried out primarily at NSF and DOE facilities and at NASA. Examples of these activities are described below. Additional NSF highlights are described at:
Simulation of Chorismate Mutase
Chorismate mutase is an enzyme involved in the synthesis of amino acids. Bacteria use it to speed up chemical reactions by a factor of more than a million. Because it does not occur in the human body, inhibitors of this enzyme may be safe antibacterial agents. Using high performance computing at the Cornell Theory Center, researchers have discovered strong, concentrated lines of electrostatic force leading to the "active site," that is, the spot where molecules are processed. This may provide the secret to the enzyme's potency and lead to improved drug design.
A five-minute high-resolution animation containing images of the complexity shown in the accompanying figure required 40 hours of computing on each of 10 processors of an IBM SP-2 and produced approximately 16 GB of data.
Simulation of chorismate mutase showing lines of electrostatic force.
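As a rough consistency check on these figures, the back-of-envelope sketch below relates the animation length to the data volume and total computing time; the 30 frames-per-second playback rate and uniform frame size are assumptions, not stated above.

    # Back-of-envelope check of the chorismate mutase animation figures.
    # Assumed (not stated in the text): 30 frames per second, equal-sized frames.
    minutes = 5
    frames_per_second = 30
    frames = minutes * 60 * frames_per_second     # 9,000 frames
    data_gb = 16.0                                # reported output volume
    mb_per_frame = data_gb * 1024 / frames        # roughly 1.8 MB per frame
    cpu_hours = 40 * 10                           # 40 hours on each of 10 SP-2 processors
    print(frames, "frames,", round(mb_per_frame, 1), "MB per frame,", cpu_hours, "CPU-hours")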
Simulation of Antibody-Antigen Association
The first stage in molecular recognition in antibody-antigen association is the encounter between the antigen and antibody under the influence of mutual intermolecular forces. Researchers at NCSA have simulated the encounter between the antibody fragment of the monoclonal antibody HyHEL-5 and the antigen, hen-egg lysozyme, accounting accurately for the electrostatic steering and orientational forces. The computational process involves solving two fundamental equations: the Poisson-Boltzmann equation, for obtaining realistic electrostatic fields due to the antibody, and the diffusion equation, for obtaining the probability of encounter between the protein and antibody. The electrostatics calculation requires a large-memory computer, while a massively parallel computer is ideal for computing the large number of trajectories needed in the diffusional simulations. The overall simulation used the Convex C3, the SGI Power Challenge, and the Thinking Machines CM-5 at NCSA. The antibody fragment has over 200 amino acids, while hen-egg lysozyme is made up of 129 amino acids. Each electrostatics computation of the antibody molecule required several hours on the Convex C3, while the Brownian dynamics trajectories took about an hour on each 512-processor partition of the CM-5.
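For reference, the two equations have the following standard forms; the linearized Poisson-Boltzmann equation and the Ermak-McCammon form of the Brownian dynamics step shown here are the textbook versions, and the exact discretizations and boundary conditions used at NCSA are not detailed in this report.

    \nabla \cdot [\epsilon(\mathbf{r}) \nabla \phi(\mathbf{r})] - \epsilon(\mathbf{r}) \kappa^2(\mathbf{r}) \phi(\mathbf{r}) = -4 \pi \rho_f(\mathbf{r})

    \Delta \mathbf{r} = \frac{D \Delta t}{k_B T} \mathbf{F}(\mathbf{r}) + \mathbf{R}, \qquad \langle |\mathbf{R}|^2 \rangle = 6 D \Delta t

Here $\phi$ is the electrostatic potential, $\epsilon$ the dielectric constant, $\kappa$ the inverse Debye screening length of the ionic solution, $\rho_f$ the fixed charge density of the antibody, $D$ the relative diffusion coefficient, $\mathbf{F}$ the electrostatic force steering the diffusing lysozyme, and $\mathbf{R}$ a random displacement with the variance shown. Solving the first equation on a fine grid is the memory-intensive step; propagating thousands of independent trajectories with the second is the naturally parallel step.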
This simulation yielded for the first time experimentally realistic rates of encounters between antibodies and proteins. Extensions to mutant proteins and antibodies are of value in protein design.
The complex between the fragment of a monoclonal antibody, HyHEL-5, and hen-egg lysozyme. The key amino acid residues involved in complexation are displayed as large spheres. The negatively charged amino acids are in red and the positively charged ones in blue. The small spheres highlight other charged residues in the antibody fragment and hen-egg lysozyme.
A Realistic Ocean Model
Satellite measurements show ocean levels rising one to three millimeters a year, which raises the question: What are the potential effects on coastal population centers? The uncertainties inherent in a system as complex as the Earth's climate mean that the only tool available for assimilating the multitude of variables and trying to make rational predictions is computer modeling.
This year researchers at the Pittsburgh Supercomputing Center took a notable step toward a realistic model of ocean circulation. Exploiting the parallel-processing ability of the Cray Research T3D, a model of circulation in the North Atlantic correctly predicted the course of the Gulf Stream; no other circulation model of the entire North Atlantic has been able to do this. This very high resolution simulation ran for 10 days on 256 T3D processors. The results have proved the feasibility of a revised approach to ocean modeling. Prior approaches have been vexed by distortions in the interaction between the warm ocean surface and deeper, colder regions; the new model implements a set of equations that eliminates this spurious heat diffusion. This work is significant for ocean modeling and climate modeling in general, confirming that with sufficient computing capability it is possible to overcome obstacles that have bedeviled this area of research.
Simulation of circulation in the North Atlantic. Color shows temperature, red corresponding to high temperature. In most prior modeling, the Gulf Stream turns left past Cape Hatteras, clinging to the continental shoreline. In this simulation, however, the Gulf Stream veers off from Cape Hatteras on a northeast course into the open Atlantic, following essentially the correct course.
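The report does not spell out the revised equation set, but the spurious heat diffusion can be illustrated with the standard tracer equation for temperature T. Assuming, for illustration, that the revised approach formulates mixing along surfaces of constant density (an isopycnic-coordinate treatment), the diffusion operator changes from acting across depth levels to acting only along density surfaces:

    \frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T = \nabla \cdot (K \nabla T)
    \quad \longrightarrow \quad
    \frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T = \nabla_{\rho} \cdot (K \nabla_{\rho} T)

where $\nabla_{\rho}$ denotes gradients taken along constant-density surfaces. In the first form, numerical diffusion mixes heat vertically across the sharp front between warm surface waters and the cold deep ocean, which is the distortion described above; in the second, that cross-density (diapycnal) heat flux is eliminated by construction.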
Reducing Aircraft Drag with Riblets
Paradoxically, riblets, or corrugated grooves, have been found to reduce drag when applied to aircraft surfaces. Using computational fluid dynamics techniques and the Intel Paragon at the San Diego Supercomputer Center, a research group is looking for correlations between the vorticity field of the flow and riblet configurations, in order to develop a general understanding of how to control wall turbulence. The appropriate use of riblets would allow for better control of drag, and hence more fuel-efficient aircraft. This work is illustrated below.
Simulations on SDSC's Intel Paragon of turbulence over surfaces mounted with streamwise riblets. Computed turbulence intensities indicate that the reduction of fluctuations near the wall with riblets (bottom) results in a six percent drag reduction in this geometry.
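The turbulence intensities mentioned in the caption are root-mean-square velocity fluctuations about the mean flow. The short sketch below shows that post-processing step for one velocity component sampled in time at several distances from the wall; the array layout and the synthetic data are illustrative only and are not taken from the SDSC code.

    import numpy as np

    # u has shape (n_samples, n_wall_distances): streamwise velocity sampled
    # over time at several distances from the wall (illustrative layout).
    rng = np.random.default_rng(0)
    u = 10.0 + rng.normal(scale=1.5, size=(5000, 8))

    u_mean = u.mean(axis=0)                              # mean velocity profile
    u_rms = np.sqrt(((u - u_mean) ** 2).mean(axis=0))    # turbulence intensity
    print("relative intensity u_rms / U:", u_rms / u_mean)

A lower u_rms near the wall for the ribleted surface, relative to the smooth wall, is what the figure identifies with the six percent drag reduction.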
The Impact of Turbulence on Weather/Climate Prediction
The primary goal of this National Center for Atmospheric Research project is to calculate fluid turbulence under the influences of environmental rotation and variable density at unprecedentedly high resolution using the most powerful computers available. The turbulence models being run under this project can be used to assist in the study of similar fluid motion phenomena in the atmosphere and oceans, because most of the volume of the Earth's atmosphere and oceans has a gravitationally stable density stratification, with lighter fluid above heavier. In combination with the planetary rotation, this causes the most energetic motions to occur mainly in horizontal planes. However, when buoyancy fluxes create a gravitationally unstable density stratification, vigorous vertical motions ensue. This convection is common in atmospheric clouds and also in polar regions of the ocean with strong surface cooling. The convection primarily occurs in plumes that carry parcels with anomalous density over long vertical distances within the unstable zone.
This image is a single frame from a volume visualization rendered from a computer model of turbulent fluid flow. The color masses indicate areas of vorticity that have stabilized within the volume after a specified period of time. The colors correspond to potential vorticity, with large positive values being blue, large negative values being red, and values near zero being transparent.
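Two of the quantities involved can be stated precisely. A stratification with lighter fluid above heavier is gravitationally stable when the buoyancy (Brunt-Vaisala) frequency squared is positive, and the potential vorticity shown in the figure is, in the standard form used for rotating stratified flow (the exact definition adopted in the NCAR model is not given here),

    N^2 = -\frac{g}{\rho_0} \frac{\partial \rho}{\partial z} > 0 \ \ \text{(stable)},
    \qquad
    q = (2\boldsymbol{\Omega} + \nabla \times \mathbf{u}) \cdot \nabla b

where $g$ is gravity, $\rho_0$ a reference density, $\boldsymbol{\Omega}$ the planetary rotation vector, $\mathbf{u}$ the velocity, and $b$ the buoyancy. Stable stratification combined with rotation confines the most energetic motions to horizontal planes, as noted above, while an unstable profile ($N^2 < 0$) drives the vertical convective plumes.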
Shoemaker-Levy 9 Collision with Jupiter
Initial computer simulations had shown that the comet -- known as Shoemaker-Levy 9 -- would end quietly, being softly caught by the largest planet in the solar system. But NSF and DOE studies indicated that a large fragment would explode after penetrating about 60 kilometers into the planet's atmosphere, creating a plume of superheated debris shooting hundreds of kilometers above Jupiter's layered clouds. The predictions, which accurately foretold the event, were performed independently on the Pittsburgh Supercomputing Center's Cray Research C90 and Sandia's Intel Paragon. The simulations would not have been possible without the large memories now available on these powerful supercomputers. Observers used the Pittsburgh simulations to plan their monitoring of the crash, and these simulations now appear to have provided an accurate description of the resulting explosions. In a recent keynote address, comet co-discoverer Eugene Shoemaker said Sandia's simulations had been responsible for alerting astronomers to look for fireball plumes. An animation of the Pittsburgh simulations can be viewed at:
Impact of the comet fragment. Image height corresponds to 1,000 kilometers. Color represents temperature, ranging from tens of thousands of degrees Kelvin (red), several times the temperature of the sun, to hundreds of degrees Kelvin (blue).
Comparison of Hubble Space Telescope (HST) images of the fireball resulting from the Comet Shoemaker-Levy 9 fragment G impact on Jupiter (left) with a Sandia computational simulation (right). The impact-generated fireball and debris plume were imaged over the horizon of Jupiter by HST over the eighteen-minute sequence shown here. Because of the viewing geometry, the lower 400 km of the fireball are beyond and below the horizon of Jupiter, and the lower 1,500 km or so lie in Jupiter's shadow, hidden from sunlight. The fireball is incandescent at early times; at late times, the illumination is due only to scattered sunlight. Information about the size of fragment G can be determined only by comparing the simulations with observations. The Sandia simulations for a 3-km-diameter ice fragment are remarkably similar to the HST data but suggest that fragment G was somewhat smaller.
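A back-of-envelope estimate conveys the scale of these impacts. The sketch below assumes a spherical ice fragment of the 3-km diameter quoted in the caption; the ice density of about 1,000 kg per cubic meter and the impact speed of roughly 60 km/s (about Jupiter's escape velocity) are assumed values not taken from this report.

    import math

    diameter_m = 3_000.0      # fragment G diameter quoted above
    density = 1_000.0         # kg/m^3, assumed comet-ice density
    speed = 60_000.0          # m/s, roughly Jupiter's escape velocity (assumed)

    volume = math.pi / 6.0 * diameter_m ** 3   # sphere volume
    mass = density * volume                    # about 1.4e13 kg
    energy_j = 0.5 * mass * speed ** 2         # kinetic energy at impact
    energy_mt = energy_j / 4.184e15            # megatons of TNT equivalent

    print(f"mass ~ {mass:.1e} kg, energy ~ {energy_j:.1e} J (~{energy_mt:.1e} Mt TNT)")

Under these assumptions the impact releases on the order of 10^22 joules, several million megatons of TNT equivalent, which is consistent with superheated plumes rising far above the cloud tops.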
Numerical Simulations Answer Questions about Vortex Structure and Dynamics in Superconductors
One of the main obstacles in the development of practical high-temperature superconducting materials is dissipation. Dissipation is caused by the motion of magnetic flux quanta called vortices. The entry and subsequent diffusion of vortices into a superconductor have been the focus of numerous experiments. However, the limited resolution of physical experiments can provide only a rough view of aggregates of vortices.
Numerical simulations on the IBM SP system at DOE's Argonne National Laboratory have provided a promising new approach for studying vortices. Numerical simulations exploit the extraordinary memory and speed of massively parallel computers, thereby attaining the extremely fine temporal and spatial resolution needed to model complex vortex behavior. For the first time, applied mathematicians and computer scientists have identified a superstructure in materials subject to strong current. The superstructure is characterized by regions of slowly varying vortex density separated by stationary fault lines. The simulations on the SP computer also revealed that vortex lattices with misoriented grains gradually "heal" when subject to a weak current. These discoveries are providing new insights into vortex behavior. More important, the numerical simulations may help researchers find ways to enhance the lattice defects, bringing superconductivity one step closer to practical application. Further details about ongoing superconductivity simulation studies (including animations and other advanced visualizations) are at:
Early stages in the formation of a magnetic flux vortex. The figure shows the penetration of a magnetic field into a thin strip of high-Tc superconducting material, which is embedded in a normal metal, and the formation of a magnetic flux vortex. The red surface is an isosurface for the magnetic induction. The isosurface follows the top and bottom of the superconducting strip (not shown). The field penetrates from the left and right sides. Thermal fluctuations cause "droplets" of magnetic flux to be formed in the interior of the strip. As time progresses, these droplets may coalesce into vortices. One vortex is being spawned from the left sheet of the isosurface. These computations were done on Argonne's IBM SP system.
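Each vortex carries exactly one quantum of magnetic flux, a physical constant. Simulations of this kind commonly evolve the time-dependent Ginzburg-Landau (TDGL) equations for the superconducting order parameter; the generic TDGL form shown here is an assumption about the Argonne model, not a statement from this report.

    \Phi_0 = \frac{h}{2e} \approx 2.07 \times 10^{-15}\ \text{Wb},
    \qquad
    \frac{\partial \psi}{\partial t} = -\frac{1}{\eta} \frac{\delta F[\psi, \mathbf{A}]}{\delta \psi^{*}} + \zeta(\mathbf{r}, t)

where $\psi$ is the order parameter, $F$ the Ginzburg-Landau free-energy functional of $\psi$ and the vector potential $\mathbf{A}$, $\eta$ a relaxation constant, and $\zeta$ a thermal-noise term corresponding to the fluctuations that nucleate the flux "droplets" seen in the figure.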
Molecular Dynamics Modeling
In collaboration with researchers in the Semiconductor Research Consortium (SRC) CRADA, researchers are using the SPaSM molecular dynamics (MD) software to model ion-implantation processes. The simulations will be used to obtain more information on the major types of defects generated by ion implantation in silicon and on their ability to migrate or diffuse at temperatures ranging from room temperature to approximately 400-500 Kelvin. Today industry largely relies on table look-up methods in its process simulators. These methods attempt to include the effects of species, energy, dose, angle, rotation, and surface films (such as screen oxides), and they have been locally calibrated over the range of typical processing conditions. The tables are not easily extensible, however, which limits the overall range of the method and makes it difficult to interpolate or extrapolate. Although the MD approach has so far been calibrated over a much smaller processing window and is not yet generally used in industry, it offers several advantages. First, MD simulations promise extensibility to new processing conditions such as multilayer implantation of dopants, and they can generate the initial damage profiles that are critical for transient enhanced diffusion (TED) calculations using point defect diffusion models, another factor important to the industry. Second, such physically-based MD calculations will permit simulation of structures with arbitrary geometries and surface films. Third, these methods promise simulation of new dopant (such as indium) and new target (such as silicide) combinations without extensive calibration, since the physics of the materials will be modeled directly.
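At its core, a molecular dynamics code such as SPaSM repeatedly integrates Newton's equations of motion for every atom. The sketch below shows the standard velocity-Verlet time step with a simple Lennard-Jones pair potential standing in for the silicon-specific many-body potentials and multimillion-atom systems used in the actual ion-implantation runs; all names and parameter values here are illustrative.

    import numpy as np

    def lennard_jones_forces(pos, eps=1.0, sigma=1.0):
        """Pairwise Lennard-Jones forces (a stand-in for silicon potentials)."""
        n = len(pos)
        forces = np.zeros_like(pos)
        for i in range(n):
            for j in range(i + 1, n):
                r = pos[i] - pos[j]
                d2 = np.dot(r, r)
                inv6 = (sigma ** 2 / d2) ** 3
                f = 24 * eps * (2 * inv6 ** 2 - inv6) / d2 * r
                forces[i] += f
                forces[j] -= f
        return forces

    def velocity_verlet(pos, vel, mass, dt, steps):
        """Advance positions and velocities by the given number of MD steps."""
        f = lennard_jones_forces(pos)
        for _ in range(steps):
            vel += 0.5 * dt * f / mass      # half-step velocity update
            pos += dt * vel                 # full-step position update
            f = lennard_jones_forces(pos)   # forces at the new positions
            vel += 0.5 * dt * f / mass      # second half-step velocity update
        return pos, vel

    # Tiny demonstration in reduced units: three atoms relaxing toward equilibrium.
    pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.6, 0.0]])
    vel = np.zeros_like(pos)
    pos, vel = velocity_verlet(pos, vel, mass=1.0, dt=0.005, steps=100)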
The industry and its university collaborators (including the University of California at Berkeley, Stanford University, and the University of Texas at Austin) need this kind of information in order to develop more accurate, physically-based, and computationally efficient models for predicting implant-induced damage of the crystal and the subsequent recrystallization with defects. Atomistic analysis and simulations are needed to support the development of more advanced phenomenological models in the simulator codes used extensively in the semiconductor industry.
MD simulation of a crystal block of 5 million silicon atoms as 11 silicon atoms are implanted, each with an energy of 15 keV. The simulation exhibits realistic phenomena such as amorphization near the surface and the channeling of some impacting atoms. These snapshots show the atoms displaced from their crystal positions (damaged areas) and the top layer (displayed in gray) at times 92 and 277 femtoseconds (10^-15 seconds) after the first impact.
Advanced Computational Research in Crash Simulation
Automobile manufacturers must crash test their new cars to determine whether they meet safety and crashworthiness standards for protecting passengers during accidents. The cost of a single car crash test can run from $50,000 to $750,000, and the test is usually performed at the end of the design process, when the possibilities for design changes are limited. An alternative to physical testing is simulation, which can provide structural information in early design stages. Design for crashworthiness is becoming more important as lighter materials (aluminum alloys, magnesium, and polymer composites) are used in new cars to make them more energy efficient. In studies employing conventional material models, it is not unusual for an analysis to require a week of computer time on current supercomputers.
The Intel Paragon massively parallel systems at DOE's Oak Ridge National Laboratory, which combine hundreds of processors that can operate concurrently on a problem, can deliver the computational power required to perform analyses of complex crash situations in a relatively short time. Using data supplied by the U.S. Department of Transportation, researchers at Oak Ridge have performed computational analyses of a 4-door sedan crashing into a lamppost at 35 miles per hour, and of an offset head-on collision between two cars. In the course of the computational analysis, the processors calculated the local deformation and the energy absorbed during a crash for each of the model's 56,000 points, or finite elements. The simulation included detailed physical models, such as 3,000 spot welds and 248 different structural materials. Compared to current industry practice, the analysis time has been reduced from 48 hours to 8 hours, and work is underway to further increase efficiency and to incorporate detailed material models and design optimization.
Illustrative of the computing power at the Center for Computational Science is the 50 percent offset crash of two Ford Taurus cars moving at 35 mph shown here. The Taurus model is detailed; the results are useful in understanding crash dynamics and their consequences. These results were obtained using parallel DYNA-3D software developed at Oak Ridge. Run times of less than one hour on the most powerful machine are expected.
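Explicit crash codes such as DYNA-3D advance the finite element equations of motion with an explicit (central-difference) time integrator, which requires no global equation solve and therefore distributes well over many processors. The toy sketch below applies the same explicit time-stepping idea to a one-dimensional chain of lumped masses and springs striking a rigid wall at 35 mph; the model, names, and parameter values are illustrative and are not taken from the Oak Ridge code.

    import numpy as np

    # 1-D chain of masses and springs striking a rigid wall at x = 0:
    # a toy stand-in for explicit finite element crash dynamics.
    n = 20                      # number of lumped masses
    m = 1.0                     # mass of each node (kg)
    k = 5.0e4                   # spring stiffness between nodes (N/m)
    k_wall = 5.0e5              # penalty contact stiffness against the wall (N/m)
    dt = 1.0e-4                 # time step, well below the explicit stability limit

    x0 = np.linspace(0.1, 2.0, n)       # initial node positions (m)
    x = x0.copy()
    v = np.full(n, -15.6)               # about 35 mph toward the wall at x = 0
    rest = np.diff(x0)                  # unstretched spring lengths

    for _ in range(2000):
        f = np.zeros(n)
        stretch = np.diff(x) - rest     # spring elongations
        f[:-1] += k * stretch           # force from the spring ahead of each node
        f[1:] -= k * stretch            # equal and opposite reaction
        if x[0] < 0.0:
            f[0] -= k_wall * x[0]       # penalty contact force from the rigid wall
        v += dt * f / m                 # explicit velocity update
        x += dt * v                     # explicit position update

    print("final front-node position:", x[0])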
Massively Parallel Numerical Methods for Advanced Simulation of Chemically Reacting Flows
SALSA is a new three-dimensional massively parallel (MP) chemically-reacting-flow code developed at DOE's Sandia National Laboratories. It allows simulations of complex three-dimensional fluid flows with complex reaction kinetics. SALSA has been used to simulate the deposition of silicon carbide (SiC) using a mechanism with 19 chemical species undergoing over 40 chemical reactions. This chemical vapor deposition (CVD) process is of interest to a number of U.S. semiconductor companies. Using advanced MP algorithms, performance of over 65 billion operations per second has been obtained in the solution phase of the simulation. This is 46 percent of the peak performance of 1,904 processors of Sandia's Intel Paragon and represents a significant increase in computational performance over state-of-the-art chemically-reacting-flow simulations. For this reason, SALSA was chosen as a finalist in the prestigious 1994 Gordon Bell Prize Competition for MP scientific applications. The development of enabling technologies, such as load balancing techniques and MP iterative solution methods, was funded by DOE's Office of Scientific Computing.
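The complex reaction kinetics enter through coupled species transport equations that must be solved together with the flow. The generic steady-state form below is a standard textbook statement rather than SALSA's exact formulation: for each of the 19 species k,

    \nabla \cdot (\rho \mathbf{u}\, Y_k) = \nabla \cdot (\rho D_k \nabla Y_k) + \dot{\omega}_k W_k,
    \qquad
    k_f = A\, T^{\beta} \exp\!\left(-\frac{E_a}{R T}\right)

where $Y_k$ is the species mass fraction, $D_k$ its diffusion coefficient, $W_k$ its molecular weight, and $\dot{\omega}_k$ its net molar production rate summed over the more than 40 reactions, each with a forward rate constant $k_f$ of the Arrhenius form shown. Coupling all 19 species equations to momentum and energy at every grid point is what makes the solution phase so demanding and so well suited to massively parallel iterative solvers.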
This complex chemically-reacting-flow simulation software is important to technology areas of interest to Federal agencies such as DOE and DOD (including ARPA), as well as U.S. industry. These areas include combustion research for transportation, atmospheric chemistry modeling for pollution studies, chemically reacting flow models for analysis and control of manufacturing processes, and CVD process modeling for production of advanced semiconductor materials.
View of fluid streamlines and the center-plane temperature distribution in a vertical-disk chemical vapor deposition reactor. Simulations such as these allow designers to produce semiconductor materials of higher uniformity by eliminating detrimental effects such as fluid recirculation.
Convective Turbulence and Mixing in Astrophysics
NASA has developed a new generation of portable production software for hydrodynamics and magnetohydrodynamics (MHD) for use in astrophysics that takes advantage of the large memory and high speed of modern parallel systems.
Managed by Goddard Space Flight Center (GSFC), the project is using tools provided by Argonne National Laboratory to develop portable codes for distributed memory environments; is working with Argonne computer scientists to refine these tools; is using University of Colorado performance tools to evaluate and improve parallel code efficiency and to study and improve network communications (including ATM); and is comparing parallel computing strategies by investigating alternative parallelization tools based on shared memory constructs. The project has:
NASA simulation of temperature fluctuations (dark: cool; light: hot) in a layer of convectively unstable gas (upper half) overlying a convectively stable layer (lower half) within the deep interior of a Sun-like star. This simulation was performed on the Argonne IBM SP-1.