
Science at Scale: SciDAC Astrophysics Code Scales to Over 200K Processors

June 18, 2010

Media Contact: Jon Bashor, JBashor@lbl.gov, 510-486-5849
Scientific Contact: Ann Almgren, ASAlmgren@lbl.gov

CASTRO scaling results on jaguarpf superimposed on a picture of nucleosynthesis during a Type Ia supernova explosion. A weak scaling approach was used in which the number of processors increases by the same factor as the number of unknowns in the problem. The red curve represents a single level of refinement; the blue and green curves are multilevel simulations with 12.5 percent of the domain refined. With perfect scaling the curves would be flat.
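For readers unfamiliar with the methodology: in a weak scaling study the work per processor is held fixed, so ideally the wall-clock time per step does not grow as processors are added. One conventional way to express this (a general definition, not one specific to these runs) is the weak-scaling efficiency

    E_{\mathrm{weak}}(P) \;=\; \frac{T(P_0)}{T(P)}, \qquad \frac{\text{unknowns}}{\text{processors}} = \text{constant},

where T(P) is the time per step on P processors and P_0 is the smallest run in the study; E_weak(P) = 1 for all P corresponds to the flat curves described in the caption.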

Performing high-resolution, high-fidelity, three-dimensional simulations of Type Ia supernovae (SNe Ia), the largest thermonuclear explosions in the universe, requires not only algorithms that accurately represent the correct physics, but also codes that effectively harness the resources of the next generation of the most powerful supercomputers.

Through the Department of Energy's Scientific Discovery through Advanced Computing (SciDAC), Lawrence Berkeley National Laboratory's Center for Computational Sciences and Engineering (CCSE) has developed two codes that can do just that.

MAESTRO, a low Mach number code for studying the pre-ignition phase of Type Ia supernovae, as well as other stellar convective phenomena, has just been demonstrated to scale to almost 100,000 processors on the Cray XT5 supercomputer "Jaguar" at the Oak Ridge Leadership Computing Facility. And CASTRO, a general compressible astrophysics radiation/hydrodynamics code that handles the explosion itself, now scales to over 200,000 processors on Jaguar—almost the entire machine. Both scaling studies simulated a pre-explosion white dwarf with a realistic stellar equation of state and self-gravity.

MAESTRO scaling results on jaguarpf superimposed on a picture of radial velocity in a white dwarf before ignition. A weak scaling approach was also used here, for a multilevel problem with 12.5 percent of the domain refined. While the overall scaling behavior is not as close to ideal as that of CASTRO because of the linear solves performed at each time step, MAESTRO is able to take a much larger time step than CASTRO for flows in which the velocity is a fraction of the speed of sound, enabling the longer integration times needed to study convection.

These and further results will be presented at the 2010 annual SciDAC conference to be held July 11-15 in Chattanooga, Tennessee.

Both CASTRO and MAESTRO are structured grid codes with adaptive mesh refinement (AMR), which focuses spatial resolution on particular regions of the domain. AMR can be used in CASTRO to follow the flame front as it evolves in time, for example, or in MAESTRO to zoom in on the center of the star where ignition is most likely to occur.
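As a rough illustration of the refinement step, and not the actual CASTRO or MAESTRO tagging criteria, the following C sketch flags the cells of a coarse one-dimensional grid where a tracked quantity changes sharply; a structured-grid AMR code would then cover the flagged region with finer grid patches.

    /* Illustrative only: tag coarse cells with steep gradients for refinement. */
    #include <stdio.h>
    #include <math.h>

    #define N 64

    int main(void) {
        double u[N];
        int flag[N];
        int ntag = 0;

        /* A steep front near the middle of the domain, a stand-in for a
           flame front or ignition region that AMR would want to resolve. */
        for (int i = 0; i < N; i++)
            u[i] = tanh(0.5 * (i - N / 2));

        /* Tag cells where the solution changes rapidly; finer patches would
           be placed over the tagged region (e.g. at 2x or 4x resolution). */
        flag[0] = flag[N - 1] = 0;
        for (int i = 1; i < N - 1; i++) {
            flag[i] = fabs(u[i + 1] - u[i - 1]) > 0.2;
            ntag += flag[i];
        }

        printf("%d of %d coarse cells tagged for refinement\n", ntag, N);
        return 0;
    }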

Like many other structured grid AMR codes, CASTRO and MAESTRO use a nested hierarchy of rectangular grids. This grid structure lends itself naturally to a hybrid OpenMP/MPI parallelization strategy. At each time step the grid patches are distributed to nodes, and MPI is used to communicate between the nodes. OpenMP is used to allow multiple cores on a node to work on the same patch of data. A dynamic load-balancing technique keeps the work evenly distributed across the nodes.
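The hybrid pattern can be sketched in a few lines of C. This is a minimal stand-alone example of MPI between nodes and OpenMP within a node, not code from CASTRO or MAESTRO; the patch counts and the "update" are placeholders.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    #define NPATCH 4      /* patches assigned to this rank (illustrative) */
    #define NCELL  1024   /* cells per patch (illustrative) */

    int main(int argc, char **argv) {
        int rank, nranks;
        static double patch[NPATCH][NCELL];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* Advance each local patch; OpenMP splits the cell loop across
           the cores of the node. */
        for (int p = 0; p < NPATCH; p++) {
            #pragma omp parallel for
            for (int i = 0; i < NCELL; i++)
                patch[p][i] = rank + 0.001 * i;   /* stand-in for the real update */
        }

        /* In the real codes, MPI would exchange ghost-cell data between
           patches on different nodes each step; a reduction stands in for
           that communication here. */
        double local = patch[0][NCELL - 1], global;
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("%d MPI ranks, %d OpenMP threads per rank, check = %g\n",
                   nranks, omp_get_max_threads(), global);

        MPI_Finalize();
        return 0;
    }

Built with an MPI compiler wrapper and OpenMP enabled (for example, mpicc -fopenmp), one MPI rank would typically be placed per node or per socket, with OpenMP threads filling the remaining cores.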

Using the low Mach number approach, the time step in MAESTRO is controlled by the fluid velocity instead of the sound speed, allowing a much larger time step than would be taken with a compressible code. This enables researchers to evolve the white dwarf for hours instead of seconds of physical time, thus allowing them to study the convection leading up to ignition. MAESTRO was developed in collaboration with astrophysicist Mike Zingale of Stony Brook University and, in addition to the SNe Ia research, is being used to study convection in massive stars, X-ray bursts, and classical novae.
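Schematically, and with the caveat that these are textbook CFL-type bounds rather than the exact conditions implemented in the codes, the step sizes compare as

    \Delta t_{\mathrm{compressible}} \;\lesssim\; \sigma \, \frac{\Delta x}{|u| + c_s},
    \qquad
    \Delta t_{\mathrm{low\ Mach}} \;\lesssim\; \sigma \, \frac{\Delta x}{|u|},

so for a Mach number |u|/c_s of, say, 0.01, the low Mach number step is roughly a hundred times larger for the same grid spacing \Delta x and CFL factor \sigma.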

MAESTRO and CASTRO share a common software framework. Soon, scientists will be able to initialize a CASTRO simulation with data mapped from a MAESTRO simulation, thus enabling them to study SNe Ia from end to end, taking advantage of the accuracy and efficiency of each approach as appropriate.

For more information about MAESTRO, please read:
Berkeley Lab Scientists' Computer Code Gives Astrophysicists First Full Simulation of Star's Final Hours


About Computing Sciences at Berkeley Lab

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.