
NERSC Uses Stimulus Funds to Overcome Software Challenges for Scientific Computing

October 30, 2009

Contacts: Linda Vu, lvu@lbl.gov, 510-495-2402

[Image: "Franklin," NERSC's Cray XT4 system]

A "multi-core" revolution is occurring in computer chip technology. No longer able to sustain the previous growth period where processor speed was continually increasing, chip manufacturers are instead producing multi-core architectures that pack increasing numbers of cores onto the chip. In the arena of high performance scientific computing, this revolution is forcing programmers to rethink the basic models of algorithm development, as well as parallel programming from both the language and parallel decomposition process.

To ensure that science effectively harnesses this new technology, the Department of Energy's (DOE) National Energy Research Scientific Computing Center (NERSC) is receiving $3.125 million in stimulus funds over the next two years from the American Recovery and Reinvestment Act to develop the Computational Science and Engineering Petascale Initiative. As part of this program, NERSC will hire eight post-doctoral researchers to help design and modify modeling codes in key research areas such as energy technologies, fusion and biosciences so that they run efficiently on emerging many-core systems.

"Emerging multi-core and other heterogeneous architectures provide opportunity for major increases in computational power over the next several years. Designing effective programming models and new strategies will be the key to achieving next-generation petascale computing," says Alice Koniges of NERSC's Science-Driven System Architecture Team, who will be heading the initiative.

The introduction of multi-core chips, as well as other new hardware designs such as those built on graphics processors (GPUs), brings a new heterogeneity to the high performance computing community. Over the past 10–15 years, scientists made remarkable progress with parallel message-passing models that allowed them to harness many thousands of processors. The new architectures, however, differ markedly in memory capacity and hierarchy, and these differences are leading researchers to consider new programming models and languages as a means of effectively exploiting the power afforded by millions of cores.
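
One common response to this shift, sketched below purely as an illustration rather than as code from any NERSC project, is a hybrid approach: MPI message passing distributes work across nodes, while OpenMP threads share memory within each multi-core node. The problem being summed, the index range and the sizes are invented for the example.

    /* Illustrative hybrid MPI + OpenMP sketch: message passing between nodes,
     * shared-memory threads within a node. Compile with an MPI wrapper and
     * OpenMP enabled, e.g. "mpicc -fopenmp hybrid.c". */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;

        /* Request thread support so OpenMP threads can coexist with MPI. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* Each MPI rank takes a slice of the global index range. */
        const long n = 10000000;
        long lo = rank * (n / nranks);
        long hi = (rank == nranks - 1) ? n : lo + n / nranks;

        double local_sum = 0.0;

        /* Within the node, OpenMP threads split the rank's slice of the loop. */
        #pragma omp parallel for reduction(+:local_sum)
        for (long i = lo; i < hi; i++)
            local_sum += 1.0 / (1.0 + (double)i);

        /* Across nodes, MPI combines the per-rank partial sums on rank 0. */
        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks: %f\n", nranks, global_sum);

        MPI_Finalize();
        return 0;
    }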

Every year, NERSC provides computing resources to more than 3,000 DOE-supported scientists, who are developing new materials, modeling climate, investigating protein structures and conducting research in a host of other scientific endeavors. As the DOE Office of Science's primary scientific computing facility, the center is also tasked with helping these users adjust their codes to keep up with new trends in high performance computing. Currently, that means helping them meet the challenges of the multi-core revolution.

Hired on a two-year term assignment, each of the initiative's post-doctoral researchers will work with key NERSC users to create new procedures for making their current algorithms suitable for very large numbers of processors, develop new algorithms to replace ones that do not scale, and introduce new language constructs that are suitable for many-core platforms. The researchers will also develop a framework for enabling a broader scientific community to take advantage of multi-core performance.
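
As a hypothetical illustration of what such language constructs can look like, rather than anything specific to the initiative, the sketch below uses the task construct added in OpenMP 3.0 to express irregular, recursive parallelism that idle cores can pick up.

    /* Illustrative sketch of a newer shared-memory language construct: OpenMP
     * 3.0 tasks. Each recursive call becomes a unit of work that any idle core
     * can execute; an example of the style, not code from the initiative. */
    #include <omp.h>
    #include <stdio.h>

    static long fib(int n)
    {
        if (n < 2)
            return n;

        long a, b;

        /* Spawn the two recursive calls as tasks; shared(a)/shared(b) let the
         * parent read the results once the tasks complete. */
        #pragma omp task shared(a)
        a = fib(n - 1);
        #pragma omp task shared(b)
        b = fib(n - 2);
        #pragma omp taskwait

        return a + b;
    }

    int main(void)
    {
        long result;

        /* One thread seeds the computation; the whole team executes the tasks. */
        #pragma omp parallel
        #pragma omp single
        result = fib(30);

        printf("fib(30) = %ld\n", result);
        return 0;
    }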

The projects selected to participate in this collaboration will come from the DOE Office of Advanced Scientific Computing Research's (ASCR) Leadership Computing Challenge Program, which dedicates a large percentage of NERSC's computing resources to high-risk, high-payoff simulations that directly relate to the DOE's energy mission, respond to national emergencies, or broaden the community of researchers capable of using leadership computing resources.

"An integral part of this initiative is collaboration. Because we are hosting this program at NERSC, the post-docs can leverage the expertise of scientists and engineers running a world-class supercomputing facility, as well as members of the Berkeley Lab's Computational Research Division, who have extensive experience in creating computational tools and techniques for a wide range of science disciplines," says Koniges.


About Computing Sciences at Berkeley Lab

High performance computing plays a critical role in scientific discovery. Researchers increasingly rely on advances in computer science, mathematics, computational science, data science, and large-scale computing and networking to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab’s Computing Sciences Area researches, develops, and deploys new foundations, tools, and technologies to meet these needs and to advance research across a broad range of scientific disciplines.