
CRD’s AMR Methods Accelerate MHD Simulations

May 1, 2004

The annual Supercomputing conference held every November is well known as a sort of international watering hole, where many of the world’s leading experts in high-performance computing gather for a week to take stock of the competition, exchange ideas, and make new connections.

At SC2000 in Dallas, Phil Colella, head of the Applied Numerical Algorithms Group at Lawrence Berkeley National Laboratory, and Steve Jardin, co-leader of the Computational Plasma Physics Group at Princeton Plasma Physics Laboratory, were both scheduled to give talks in the Berkeley Lab booth. Colella discussed “Adaptive mesh refinement research and software at NERSC,” the center where Jardin has conducted his scientific computing for years. Jardin gave a presentation on “A Parallel Resistive MHD Program with Application to Magnetic Reconnection.”

The scientist and the mathematician got to talking between their presentations and one thing led to another. They began an informal collaboration which was soon formalized under the auspices of SciDAC—Jardin is principal investigator for the Center for Extended Magnetohydrodynamic Modeling (CEMM), while Colella is PI for the Applied Partial Differential Equations Center (APDEC). Jardin’s group was able to incorporate the CHOMBO adaptive mesh refinement (AMR) code developed by Colella’s group into a new fusion simulation code, which is now called the Princeton AMRMHD code. “Using the AMR code resulted in a 30 times improvement over what we would have had with a uniform mesh code of the highest resolution,” Jardin says.

The AMRMHD code, developed in conjunction with Princeton researcher Ravi Samtaney and Berkeley Lab researcher Terry Ligocki, is already producing new physics results as well. It powered the first simulation demonstrating that the presence of a magnetic field will suppress the growth of the Richtmyer-Meshkov instability when a shock wave interacts with a contact discontinuity separating ionized gases of different densities. The upper and lower images in Figure 3 contrast the interface without (upper) and with (lower) the magnetic field. In the presence of the field, the vorticity generated at the interface is transported away by the fast and slow MHD shocks, removing the driver of the instability. Results are shown for an effective mesh of 16,384 × 2,048 points, which took approximately 150 hours to run on 64 processors of Seaborg—25 times faster than a non-AMR code.
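The suppression mechanism can be seen in the vorticity budget of the flow. A schematic form of the MHD vorticity equation, written here only for illustration and not taken from the Princeton group’s papers, is

\frac{\partial \boldsymbol{\omega}}{\partial t} = \nabla \times (\mathbf{u} \times \boldsymbol{\omega}) + \frac{\nabla \rho \times \nabla p}{\rho^{2}} + \nabla \times \left( \frac{(\nabla \times \mathbf{B}) \times \mathbf{B}}{\mu_{0}\,\rho} \right)

where the second term on the right is the baroclinic source, which deposits vorticity wherever the shock’s pressure gradient is misaligned with the density gradient across the interface, and the last term is the Lorentz-force contribution that has no counterpart in ordinary hydrodynamics. Without the field, the baroclinic vorticity stays at the interface and rolls it up; with the field, the Lorentz-force term lets the fast and slow MHD shocks carry that vorticity away, which is the suppression seen in the simulation.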

Another new physical effect discovered by the AMRMHD code is current bunching and ejection during magnetic reconnection (Figure 4). Magnetic reconnection refers to the breaking and reconnecting of oppositely directed magnetic field lines in a plasma. In the process, magnetic field energy is converted to plasma kinetic and thermal energy.
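In ideal MHD the magnetic field is frozen into the plasma and field lines cannot change how they are connected; it is the resistive term in the induction equation that lets them break and rejoin. For a uniform magnetic diffusivity \eta, a standard illustrative form (not the specific equation set solved in the AMRMHD code) is

\frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{u} \times \mathbf{B}) + \eta \nabla^{2} \mathbf{B}

In the thin current sheets that form where oppositely directed field lines press together, the diffusive term becomes locally significant, reconnecting the field and releasing magnetic energy into flows and heat—the energy conversion described above.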

The CEMM project has been collaborating with other SciDAC software centers in addition to APDEC. For example, in a collaboration that predated SciDAC, the group developing the M3D code was using PETSc, a portable toolkit of sparse solvers distributed as part of the ACTS Collection of DOE-developed software tools. Also in the ACTS Collection is Hypre, a library of preconditioners that can be used in conjunction with PETSc. Under SciDAC, the Terascale Optimal PDE Solvers (TOPS) Center worked with CEMM to add Hypre underneath the same code interface that M3D was already using to call the PETSc solvers. The combined PETSc-Hypre solver library allows M3D to solve its linear systems two to three times faster than before.
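The pairing works because PETSc exposes Hypre’s preconditioners through its own preconditioner (PC) interface, so a code that already calls PETSc’s Krylov solvers can switch to a Hypre preconditioner without changing its solver calls. The sketch below, written in C with current PETSc conventions, shows that pattern in generic form; the function name, the choice of GMRES, and the BoomerAMG option are illustrative assumptions, not details of M3D itself.

#include <petscksp.h>

/* Minimal sketch: solve A x = b with a PETSc Krylov method preconditioned by
 * Hypre's BoomerAMG algebraic multigrid. The matrix A and vectors b, x are
 * assumed to be assembled elsewhere; this is not code from M3D. */
PetscErrorCode solve_with_hypre(Mat A, Vec b, Vec x)
{
  KSP ksp;
  PC  pc;

  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetType(ksp, KSPGMRES));        /* Krylov method supplied by PETSc */
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCHYPRE));           /* hand preconditioning to Hypre */
  PetscCall(PCHYPRESetType(pc, "boomeramg"));  /* algebraic multigrid preconditioner */
  PetscCall(KSPSetFromOptions(ksp));           /* allow run-time overrides */
  PetscCall(KSPSolve(ksp, b, x));
  PetscCall(KSPDestroy(&ksp));
  return PETSC_SUCCESS;
}

Because the preconditioner is selected through the PC object, the same choice can also be made at run time with command-line options such as -pc_type hypre -pc_hypre_type boomeramg, which is how many PETSc-based codes experiment with solver combinations.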

According to Jardin, fusion plasma models are continually being improved both by more complete descriptions of the physical processes, and by more efficient algorithms, such as those provided by PETSc and CHOMBO. Advances such as these have complemented increases in computer hardware speeds to provide a capability today that is vastly improved over what was possible 30 years ago (Figure 5). This rate of increase of effective capability is essential to meet the anticipated modeling demands of fusion energy research, Jardin says.

“Presently, we can apply our most complete computational models to realistically simulate both nonlinear macroscopic stability and microscopic turbulent transport in the smaller fusion experiments that exist today, at least for short times,” Jardin says. “Anticipated increases in both hardware and algorithms during the next five to ten years will enable application of even more advanced models to the largest present-day experiments and to the proposed burning plasma experiments such as ITER [the International Thermonuclear Experimental Reactor].”


About Computing Sciences at Berkeley Lab

High performance computing plays a critical role in scientific discovery. Researchers increasingly rely on advances in computer science, mathematics, computational science, data science, and large-scale computing and networking to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab’s Computing Sciences Area researches, develops, and deploys new foundations, tools, and technologies to meet these needs and to advance research across a broad range of scientific disciplines.