[Figure: Sea surface temperature from a multi-year simulation with the UCLA coupled atmosphere-ocean model. The atmospheric component is the UCLA General Circulation Model; the oceanic component is the GFDL Modular Ocean Model. The simulation was performed on the Cray Y-MP at the San Diego Supercomputer Center.]
Understanding the Earth's climate system and its trends is one of the most challenging problems facing the scientific community today, and a better understanding of that system is critical for the Nation as it prepares for the 21st century. The Earth's atmosphere-ocean system, and the physical laws that govern its behavior, are complex and full of subtle detail; even the most comprehensive present-day climate models represent this system only crudely. Improving the computational modeling of many of the component physical processes, such as cloud-radiation interaction, will require long-term effort, and these model improvements will demand a hundred- to thousand-fold increase in computing, communications, and data management capabilities. In addition, forecasting regional climate change will require large increases in model detail and sophistication.
Under the sponsorship of the Federal HPCC Initiative and other programs, new computing and communications resources are being developed for climate research. Current state-of-the-art climate models are being redesigned to execute efficiently on promising new scalable architecture systems. Scientists are also investigating distributed computing strategies that will use gigabit-per-second data transfer between distant supercomputers.
The coupled atmosphere-ocean model is the primary tool by which climate scientists simulate the behavior of the Earth's climate system. As a first step in porting a fully coupled model to massively parallel processor systems, scientists at several sites are redesigning separate atmosphere and ocean model elements for execution on scalable parallel systems.
A parallel ocean model, referred to as the Parallel Ocean Program (POP), has demonstrated gigaflop performance during early data-parallel experiments on the 1,024-node Thinking Machines CM-5 at DOE's High Performance Computing Research Center at Los Alamos National Laboratory (LANL). This research effort is being funded under DOE's Computer Hardware, Advanced Mathematics, and Model Physics (CHAMMP) program. LANL scientists collaborate on the project with scientists at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL), the Naval Postgraduate School (NPS), and NSF's National Center for Atmospheric Research (NCAR).
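The data-parallel style used in these CM-5 experiments expresses each operation over the entire model grid at once, leaving the system to spread the work across all processing nodes. The sketch below illustrates that whole-array form with a toy diffusion step on an ocean temperature grid; it is a minimal illustration only, and the grid size, diffusivity, and time step are assumed values, not POP parameters.

    import numpy as np

    # Toy ocean grid: sea surface temperature on a small lat-lon mesh.
    # Sizes and constants are illustrative, not taken from POP.
    nlat, nlon = 64, 128
    sst = 15.0 + 10.0 * np.random.rand(nlat, nlon)   # degrees C
    kappa, dt = 0.1, 1.0                             # toy diffusivity, time step

    def step(field):
        # Data-parallel update: each line is one whole-array operation,
        # the form a data-parallel system applies across all nodes at once.
        # np.roll supplies periodic neighbors in both grid directions.
        lap = (np.roll(field, 1, axis=0) + np.roll(field, -1, axis=0) +
               np.roll(field, 1, axis=1) + np.roll(field, -1, axis=1) -
               4.0 * field)
        return field + kappa * dt * lap

    for _ in range(100):
        sst = step(sst)
    print("mean SST after 100 steps:", sst.mean())

The key property is the absence of any explicit loop over grid points or processors; the parallelism is implicit in the array expressions themselves.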
Over the past decade, the U.S. academic community has made extensive use of NCAR's Community Climate Model (CCM) for global climate research. Recently, researchers at two DOE laboratories, Argonne National Laboratory and Oak Ridge National Laboratory, also funded under the CHAMMP program, have been collaborating with NCAR scientists to develop parallel versions of the latest CCM code for next-generation massively parallel systems. This model, referred to as the Parallel Community Climate Model 2 (PCCM2), is now demonstrating near-gigaflop performance on the 512-processor Intel Touchstone Delta. A data-parallel version for the Thinking Machines CM-5 is also being prepared. In ongoing investigations, researchers are exploring new parallel algorithms designed to improve performance on message-passing systems and incorporating new numerical methods that will improve climate simulations.
Scientists at Lawrence Livermore National Laboratory (LLNL) are simultaneously developing message-passing versions of the models described above. These versions exploit the inherent parallelism of climate problems in a different fashion and can be executed on the class of parallel systems built around distributed-memory architectures.
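In the message-passing formulation, by contrast, the globe is divided into subdomains, one per processor, and each processor explicitly exchanges boundary ("halo") rows with its neighbors before updating its own interior points. The following minimal sketch shows that exchange using the mpi4py bindings to MPI; the one-dimensional latitude-band decomposition and the field contents are illustrative assumptions, not the LLNL implementation.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each process owns a band of latitude rows plus one halo row on
    # each side; this toy decomposition is an assumption for the sketch.
    nrows, nlon = 16, 128
    field = np.full((nrows + 2, nlon), float(rank))  # interior + 2 halo rows

    north = (rank - 1) % size   # periodic neighbors, for simplicity
    south = (rank + 1) % size

    # Explicit message passing: send our edge rows, receive the
    # neighbors' edge rows into our halo rows.
    comm.Sendrecv(field[1].copy(), dest=north, recvbuf=field[-1], source=south)
    comm.Sendrecv(field[-2].copy(), dest=south, recvbuf=field[0], source=north)

    # Each process could now apply the same stencil as the data-parallel
    # version, but only to its own interior rows.
    print(f"rank {rank}: halos hold {field[0,0]:.0f} and {field[-1,0]:.0f}")

Run under an MPI launcher, for example "mpiexec -n 4 python halo_sketch.py" (the file name is hypothetical). The contrast with the data-parallel sketch above is that here every data movement between processors is written out explicitly by the programmer.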
A high-resolution atmospheric general circulation model known as SKYHI has been developed and used for a number of years by scientists at NOAA's GFDL to investigate stratospheric circulation and ozone depletion. The ozone depletion study, presented in "Grand Challenges 1993," is an example of the scientific results obtained from this model. GFDL scientists are now working with LANL scientists to restructure the programs that make up the SKYHI model so that it can execute on massively parallel systems supporting both data-parallel and message-passing programming paradigms.
These various model development efforts all share the same ultimate objective: a coupled atmosphere-ocean climate model that executes efficiently on scalable parallel computers.
Just as demand for more computational resources is growing within the climate research community, so is the need to distribute the computing load among different computers, both at one site and, when appropriate, across the country. With this objective in mind, researchers at the University of California, Los Angeles (UCLA) are collaborating with scientists at NSF's San Diego Supercomputer Center (SDSC), the California Institute of Technology (Caltech), and NASA's Jet Propulsion Laboratory (JPL) to demonstrate the feasibility of distributed supercomputing for a coupled climate model. In this research, the separate atmosphere and ocean components of a single climate computation are distributed over multiple supercomputers connected by a high-speed network. Initial prototype experiments have been performed between Cray Research Y-MP systems at SDSC and NCAR, connected by a 1.5-megabit-per-second T-1 data link. The imminent availability of a gigabit-per-second network under the CASA gigabit testbed project will soon provide a far more capable communications path. Plans then call for computations to be distributed between a Cray Y-MP at either SDSC or JPL and an Intel supercomputer at either SDSC or Caltech. Such distributed computing experiments provide realistic tests for the gigabit-per-second connections anticipated on the NREN of the future.
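The essential pattern in these distributed experiments is a periodic exchange of boundary fields: the atmosphere component sends surface fluxes to the ocean component, which integrates forward and returns updated sea surface temperatures, with each model computing independently between exchanges. The sketch below mimics that handshake with two local processes joined by a pipe standing in for the T-1 (and eventually gigabit) link; the field sizes, flux formula, and number of coupling intervals are toy assumptions, not the UCLA/GFDL coupling.

    import numpy as np
    from multiprocessing import Process, Pipe

    NCOUPLE = 5   # illustrative number of coupling intervals

    def atmosphere(conn):
        # Stand-in for the atmospheric model on one supercomputer.
        sst = np.full(64, 15.0)                # initial guess at SST
        for _ in range(NCOUPLE):
            flux = 100.0 + 0.5 * sst           # toy surface heat flux
            conn.send(flux)                    # -> ocean, over the "network"
            sst = conn.recv()                  # <- updated SST from ocean
        conn.close()

    def ocean(conn):
        # Stand-in for the ocean model on a second, distant machine.
        sst = np.full(64, 15.0)
        for _ in range(NCOUPLE):
            flux = conn.recv()                 # <- fluxes from atmosphere
            sst = sst + 1e-3 * (flux - 115.0)  # toy ocean response
            conn.send(sst)                     # -> SST back to atmosphere
        print("final mean SST:", sst.mean())
        conn.close()

    if __name__ == "__main__":
        a_end, o_end = Pipe()
        procs = [Process(target=atmosphere, args=(a_end,)),
                 Process(target=ocean, args=(o_end,))]
        for p in procs: p.start()
        for p in procs: p.join()

Because each component computes between exchanges, the bandwidth and latency of the link determine how finely the coupling interval can be set, which is why the move from T-1 to gigabit-per-second networking matters for these experiments.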