
Science at Scale: SciDAC Astrophysics Code Scales to Over 200K Processors

June 18, 2010

Media Contact: Jon Bashor
Scientific Contact: Ann Almgren
Contact: cscomms@lbl.gov


CASTRO scaling results on jaguarpf superimposed on a picture of nucleosynthesis during a Type Ia supernova explosion. A weak scaling approach was used in which the number of processors increases by the same factor as the number of unknowns in the problem. The red curve represents a single level of refinement; the blue and green curves are multilevel simulations with 12.5 percent of the domain refined. With perfect scaling the curves would be flat.
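
To make the weak scaling measure concrete, here is a minimal C++ sketch of how weak scaling efficiency is typically computed; the processor counts and times below are hypothetical, not the measured jaguarpf results:

    #include <cstdio>

    int main() {
        // Under weak scaling the work per processor is fixed, so ideal
        // behavior is a constant time per step (a flat curve). All numbers
        // here are illustrative only.
        const int n = 4;
        double procs[n]         = {1000, 8000, 64000, 200000};
        double time_per_step[n] = {10.0, 10.4, 11.1, 12.3};  // seconds

        for (int i = 0; i < n; ++i)
            std::printf("%8.0f procs: %5.1f s/step, efficiency %.2f\n",
                        procs[i], time_per_step[i],
                        time_per_step[0] / time_per_step[i]);
        return 0;
    }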

Performing high-resolution, high-fidelity, three-dimensional simulations of Type Ia supernovae, the largest thermonuclear explosions in the universe, requires not only algorithms that accurately represent the correct physics, but also codes that effectively harness the resources of the next generation of the most powerful supercomputers.

Through the Department of Energy's Scientific Discovery through Advanced Computing (SciDAC), Lawrence Berkeley National Laboratory's Center for Computational Sciences and Engineering (CCSE) has developed two codes that can do just that.

MAESTRO, a low Mach number code for studying the pre-ignition phase of Type Ia supernovae, as well as other stellar convective phenomena, has just been demonstrated to scale to almost 100,000 processors on the Cray XT5 supercomputer "Jaguar" at the Oak Ridge Leadership Computing Facility. And CASTRO, a general compressible astrophysics radiation/hydrodynamics code that handles the explosion itself, now scales to over 200,000 processors on Jaguar, almost the entire machine. Both scaling studies simulated a pre-explosion white dwarf with a realistic stellar equation of state and self-gravity.


MAESTRO scaling results on jaguarpf superimposed on a picture of radial velocity in a white dwarf before ignition. A weak scaling approach was also used here, for a multilevel problem with 12.5 percent of the domain refined. While the overall scaling behavior is not as close to ideal as CASTRO's because of the linear solves performed at each time step, MAESTRO can take a much larger time step than CASTRO for flows in which the velocity is a small fraction of the speed of sound, enabling the longer integration times needed to study convection.

These and further results will be presented at the 2010 annual SciDAC conference to be held July 11-15 in Chattanooga, Tennessee.

Both CASTRO and MAESTRO are structured grid codes with adaptive mesh refinement (AMR), which focuses spatial resolution on particular regions of the domain. AMR can be used in CASTRO to follow the flame front as it evolves in time, for example, or in MAESTRO to zoom in on the center of the star where ignition is most likely to occur.
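
As a rough illustration of the idea behind AMR tagging (a sketch only, not the actual CASTRO/MAESTRO implementation), cells where the solution changes rapidly can be flagged and then covered by finer grid patches:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Tag cells for refinement where the density gradient is steep, e.g.
    // near a flame front. In a real AMR code the tagged cells would then
    // be grouped into finer rectangular patches with smaller cell spacing.
    std::vector<bool> tag_cells(const std::vector<double>& rho,
                                double dx, double grad_threshold) {
        std::vector<bool> tags(rho.size(), false);
        for (std::size_t i = 1; i + 1 < rho.size(); ++i) {
            double grad = std::fabs(rho[i + 1] - rho[i - 1]) / (2.0 * dx);
            if (grad > grad_threshold) tags[i] = true;
        }
        return tags;
    }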

Like many other structured grid AMR codes, CASTRO and MAESTRO use a nested hierarchy of rectangular grids. This grid structure lends itself naturally to a hybrid OpenMP/MPI parallelization strategy. At each time step the grid patches are distributed to nodes, and MPI is used to communicate between the nodes. OpenMP is used to allow multiple cores on a node to work on the same patch of data, and a dynamic load-balancing technique redistributes patches to keep the work evenly spread across nodes.
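
A minimal sketch of this hybrid strategy, assuming a simple round-robin assignment of patches to MPI ranks (the production codes balance the load dynamically, as noted above; compile with, e.g., mpicxx -fopenmp):

    #include <mpi.h>
    #include <vector>

    struct Patch { std::vector<double> data; };

    // OpenMP threads on one node split the loop over a single patch's cells.
    void advance(Patch& p) {
        #pragma omp parallel for
        for (long i = 0; i < (long)p.data.size(); ++i)
            p.data[i] += 1.0;  // stand-in for the real stencil update
    }

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, nranks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        // Round-robin assignment of patches to MPI ranks; a dynamic load
        // balancer would instead weigh the cost of each patch.
        const int npatches = 64;
        std::vector<Patch> mine;
        for (int p = rank; p < npatches; p += nranks)
            mine.push_back(Patch{std::vector<double>(32 * 32 * 32, 0.0)});

        for (Patch& p : mine) advance(p);
        // Ghost-cell exchange between neighboring patches would use
        // MPI point-to-point communication here.

        MPI_Finalize();
        return 0;
    }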

Using the low Mach number approach, the time step in MAESTRO is controlled by the fluid velocity instead of the sound speed, allowing a much larger time step than a compressible code could take. This enables researchers to evolve the white dwarf for hours instead of seconds of physical time, and thus to study the convection leading up to ignition. MAESTRO was developed in collaboration with astrophysicist Mike Zingale of Stony Brook University. In addition to the Type Ia supernova (SNe Ia) research, it is being used to study convection in massive stars, X-ray bursts, and classical novae.
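
The gain is easy to quantify. A compressible code's time step is limited by the acoustic constraint dt <= CFL * dx / (|u| + c), while a low Mach number code is limited only by the advective constraint dt <= CFL * dx / |u|; for convection at Mach 0.01 that is roughly a factor of 100. A sketch with hypothetical numbers, not values taken from an actual white dwarf simulation:

    #include <cstdio>

    int main() {
        const double dx  = 1.0e5;  // cell size in cm (hypothetical)
        const double c   = 1.0e9;  // sound speed in cm/s (hypothetical)
        const double u   = 1.0e7;  // convective velocity; Mach = u/c = 0.01
        const double cfl = 0.5;

        double dt_compressible = cfl * dx / (u + c);  // acoustic constraint
        double dt_low_mach     = cfl * dx / u;        // advective constraint

        std::printf("compressible dt: %.3e s\n", dt_compressible);
        std::printf("low Mach dt:     %.3e s (%.0fx larger)\n",
                    dt_low_mach, dt_low_mach / dt_compressible);
        return 0;
    }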

MAESTRO and CASTRO share a common software framework. Soon, scientists will be able to initialize a CASTRO simulation with data mapped from a MAESTRO simulation, thus enabling them to study SNe Ia from end to end, taking advantage of the accuracy and efficiency of each approach as appropriate.

For more information about MAESTRO, please read:
Berkeley Lab Scientists' Computer Code Gives Astrophysicists First Full Simulation of Star's Final Hours


About Computing Sciences at Berkeley Lab

High performance computing plays a critical role in scientific discovery. Researchers increasingly rely on advances in computer science, mathematics, computational science, data science, and large-scale computing and networking to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab’s Computing Sciences Area researches, develops, and deploys new foundations, tools, and technologies to meet these needs and to advance research across a broad range of scientific disciplines.