
InTheLoop | 09.28.2009

The weekly newsletter for Berkeley Lab Computing Sciences

September 28, 2009

CCSE’s Code Yields First Full Simulation of Star’s Final Hours

The precise conditions inside a white dwarf star in the hours leading up to its explosive end as a Type Ia supernova remain a mystery to astrophysicists studying these massive stellar explosions. But now, a team of researchers, including Ann Almgren, John Bell and Andy Nonaka of the Center for Computational Sciences and Engineering (CCSE), has created the first full-star simulation of the hours preceding the largest thermonuclear explosions in the universe.

In a paper published in the October issue of the Astrophysical Journal, the CCSE researchers, along with Mike Zingale of Stony Brook University and Stan Woosley of the University of California, Santa Cruz, describe the first three-dimensional, full-star simulations of convection in a white dwarf leading up to ignition of a Type Ia supernova. The project was funded by the DOE Office of Science. Read the news release and see the video.

The work was also reported in Popular Science.


John Shalf to Give Keynote Address at iWAPT2009 in Japan

John Shalf, head of NERSC’s Science-Driven System Architecture team, will be a keynote speaker at the Fourth International Workshop on Automatic Performance Tuning (iWAPT2009), October 1–2 at the University of Tokyo, Japan. Shalf’s talk, “Green Flash: Extreme-Scale Computing on a Finite Power Budget,” will describe the Green Flash research project, which is developing an energy-efficient computing architecture that uses auto-tuning methods for both hardware and software optimization. (Osni Marques of CRD is on the workshop program committee.)
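
As a rough illustration of the auto-tuning idea, the sketch below benchmarks a single kernel over several candidate tuning parameters and keeps the fastest setting. It is a generic, minimal example, not the Green Flash framework or its actual search strategy; the cache-blocked transpose kernel and the candidate block sizes are invented for the illustration.

"""Minimal sketch of empirical auto-tuning: time a kernel over a set of
candidate tuning parameters and keep the fastest variant. This is a
generic illustration of the auto-tuning idea, not the Green Flash
framework; the kernel (a cache-blocked matrix transpose) and the
candidate block sizes are made up for the example."""

import time

N = 512
A = [[float(i * N + j) for j in range(N)] for i in range(N)]

def blocked_transpose(a, block):
    """Transpose an N x N matrix, visiting it in block x block tiles."""
    n = len(a)
    out = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, block):
        for jj in range(0, n, block):
            for i in range(ii, min(ii + block, n)):
                for j in range(jj, min(jj + block, n)):
                    out[j][i] = a[i][j]
    return out

def benchmark(block, trials=3):
    """Return the best wall-clock time over a few trials for one block size."""
    best = float("inf")
    for _ in range(trials):
        start = time.perf_counter()
        blocked_transpose(A, block)
        best = min(best, time.perf_counter() - start)
    return best

# The auto-tuning search: measure each candidate and keep the fastest.
candidates = [8, 16, 32, 64, 128]
timings = {b: benchmark(b) for b in candidates}
best_block = min(timings, key=timings.get)
print("timings:", {b: round(t, 4) for b, t in timings.items()})
print("selected block size:", best_block)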

While in Japan, Shalf will also visit Kyoto University, home of one of the major HPC systems in Japan. He will meet with faculty who are working on Japan’s 10 petaflop computing project, in which Japan is investing close to $1B, and learn more about the architecture plans. Shalf will give a presentation describing NERSC, our application workload, and our research interests in order to find areas of mutual interest for potential research collaborations.


West Coast Premiere of “Hard Problems” Film at MSRI

The Mathematical Sciences Research Institute (MSRI) will present the West Coast premiere of “Hard Problems (The Road to the World’s Toughest Math Contest)” from 5:00 to 8:00 p.m. on Wednesday, September 30 in Simons Auditorium at MSRI. The film is a documentary about the gifted students who represented the US in 2006 at the International Mathematical Olympiad (IMO). The film will be broadcast on PBS stations starting in October 2009.

The screening will be preceded by a reception for the director, George Csicsery, and followed by a panel discussion led by former IMO medal winner and coach Paul Zeitz, with students from the film and the 2006 IMO team and leaders from MSRI’s Bay Area Math Circles, Olympiad, and Festival programs.


In SCG, One Postdoc Moves Up, Another Signs On

In the Scientific Computing Group, one postdoc has accepted a career position, while another joins the group.

Rollin Thomas has accepted a career-track research scientist position in the Scientific Computing Group. A member of the Computational Cosmology Center, Thomas will continue his work on supernova research. After graduating from the University of Oklahoma in 2003, he joined the Nearby Supernova Factory, a collaboration among Berkeley Lab and several other institutions in the United States and France. In 2007, he was named an Alvarez Fellow.

Christopher Calderon has joined the Scientific Computing Group as a postdoc. Prior to joining LBNL, Calderon was a postdoc at Rice University. He received his B.S. in chemical engineering from Purdue University and his Ph.D. in chemical engineering from Princeton University. At LBNL, Calderon will be involved in the SciDAC TOPS (Toward Optimal Petascale Simulations) and UNEDF (Building a Universal Nuclear Energy Density Functional) projects.


This Week’s Computing Sciences Seminars

XcalableMP: A performance-aware scalable parallel programming language for distributed memory system — beyond PGAS models

Tuesday, September 29, 1:30-2:30 p.m., Bldg. 50F, Room 1647 (with video link to Bldg. 943, Room 238)
Mitsuhisa Sato, University of Tsukuba, Japan (http://www.hpcs.is.tsukuba.ac.jp/~msato/)

In the coming years, petascale high-performance computing systems with performance exceeding a petaflops are being built and installed in the US, Japan, and Europe. To make use of such systems, software that supports parallel programming at the petascale is indispensable. We have recently launched a project to develop a parallel programming language for petascale distributed memory systems. In this project, we are designing a directive-based language extension and programming model, called XcalableMP (XMP for short), which allows users to develop parallel programs for distributed memory systems easily and to tune their performance with minimal, simple notations. XcalableMP provides global-view programming partially inherited from HPF, and also includes a coarray feature for local-view programming. In this talk, I will present our experience with HPF (High Performance Fortran) and describe the ideas behind XcalableMP.

A Genetic Programming Approach to Automated Software Repair

Tuesday, September 29, 12:30-1:30 p.m., 380 Soda Hall, UC Berkeley
Westley Weimer, University of Virginia

Automatic program repair has been a longstanding goal in software engineering, yet debugging remains a largely manual process. We introduce a fully automated method for repairing bugs in software. The approach works on off-the-shelf legacy applications and does not require formal specifications, program annotations or special coding practices. Once a program fault is discovered, an extended form of genetic programming is used to evolve program variants until one is found that both retains required functionality and also avoids the defect in question. Standard test cases are used to exercise the fault and to encode program requirements. After a successful repair has been discovered, it is minimized using structural differencing algorithms and delta debugging. We describe the proposed method and report experimental results demonstrating that it can successfully repair twenty programs totaling almost 200,000 lines of code in under 200 seconds, on average. In addition, we describe how to combine the automatic repair mechanism with anomaly intrusion detection to produce a closed-loop repair system, and empirically evaluate the resulting repair quality. Finally, we propose a test suite sampling technique for automated repair, allowing programs with hundreds of test cases to be repaired in minutes with no loss in repair quality.
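
To make the repair loop described in the abstract concrete, here is a toy, heavily simplified Python sketch: candidate variants of a small buggy “program” are mutated and scored against a test suite until one passes every test. It is not the authors’ GenProg tool, which operates on C abstract syntax trees; the buggy statement list, mutation pool, and test cases are invented for the example, and the minimization and intrusion-detection steps described in the talk are omitted.

"""Toy sketch of genetic-programming-based repair: mutate candidate
variants of a buggy program and score them against a test suite until
a variant passes every test. All inputs below are invented for
illustration; the real system described in the talk works on C code."""

import random

# The "program" is a list of statements executed in order.
# The bug: the second statement subtracts instead of adding.
buggy = [
    "total = x * x",
    "total = total - y",   # should be: total = total + y
    "result = total",
]

# Statements the mutation operator may splice in (GenProg similarly reuses
# statements found elsewhere in the program).
statement_pool = [
    "total = total + y",
    "total = total - y",
    "total = total * y",
    "result = total",
]

# Test suite: (x, y) inputs and the expected value of `result` (x*x + y).
tests = [((2, 3), 7), ((0, 5), 5), ((4, 1), 17), ((10, 10), 110)]

def run(program, x, y):
    """Execute the candidate program on one test input."""
    env = {"x": x, "y": y}
    try:
        for stmt in program:
            exec(stmt, {}, env)
        return env.get("result")
    except Exception:
        return None

def fitness(program):
    """Number of test cases the candidate passes."""
    return sum(run(program, *inp) == expected for inp, expected in tests)

def mutate(program):
    """Replace one randomly chosen statement with one from the pool."""
    child = list(program)
    child[random.randrange(len(child))] = random.choice(statement_pool)
    return child

random.seed(0)
population = [buggy] + [mutate(buggy) for _ in range(19)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if fitness(best) == len(tests):
        print("repaired in generation", generation, "->", best)
        break
    # Keep the fittest half and refill the population with their mutants.
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]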

Spectral Methods for Parameterized Matrix Equations

Wednesday, September 30, 11:10 a.m.-12 p.m., 380 Soda Hall, UC Berkeley
Paul Constantine, Stanford University

Parameter-dependent systems arise frequently in models for engineering applications. We apply polynomial approximation methods — known in the numerical PDEs context as spectral methods — to approximate the vector-valued function that satisfies a given parameterized matrix equation.
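
As a minimal sketch of that idea, the Python/NumPy example below solves a small parameterized matrix equation A(s)x(s) = b at a few sample values of the parameter, fits a polynomial to each component of the solution, and checks the resulting surrogate against direct solves. The 2x2 matrix, the Chebyshev sample points, and the polynomial fit are illustrative assumptions, not necessarily the formulation used in the talk.

"""Minimal sketch of polynomial (spectral-style) approximation of the
solution of a parameterized matrix equation A(s) x(s) = b: solve the
system at a few sample values of s, fit a polynomial to each component
of x(s), and compare the surrogate against direct solves. The 2x2
system, sample points, and degree are illustrative choices."""

import numpy as np

def A(s):
    """A parameter-dependent, nonsingular 2x2 matrix for s in [-1, 1]."""
    return np.array([[2.0 + s, 1.0],
                     [1.0, 3.0 + s**2]])

b = np.array([1.0, 2.0])

# Sample the parameter at Chebyshev points on [-1, 1] and solve at each.
degree = 6
s_nodes = np.cos(np.pi * (np.arange(degree + 1) + 0.5) / (degree + 1))
x_nodes = np.array([np.linalg.solve(A(s), b) for s in s_nodes])

# Fit a degree-6 polynomial to each component of the solution vector.
coeffs = [np.polyfit(s_nodes, x_nodes[:, k], degree) for k in range(2)]

def x_approx(s):
    """Evaluate the polynomial surrogate for x(s)."""
    return np.array([np.polyval(c, s) for c in coeffs])

# Check the surrogate against direct solves at new parameter values.
for s in [-0.9, 0.0, 0.37, 0.8]:
    exact = np.linalg.solve(A(s), b)
    err = np.linalg.norm(x_approx(s) - exact)
    print(f"s = {s:5.2f}  exact = {exact}  surrogate error = {err:.2e}")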

Global Integrated Monitoring for the Environment, Public Health, and Disaster Management

Wednesday, September 30, 12-1 p.m., Banatao Auditorium, Sutardja Dai Hall, UC Berkeley
Kevin Montgomery, CEO, Intelesense Technologies

Never before in the history of mankind has our ability to acquire data about our planet and its inhabitants been so prolific. Data from in-situ sensors deployed worldwide, human-generated data (surveys, land use patterns, etc.), and satellite imagery and remote sensing data have produced an enormous amount of information, totaling hundreds of gigabytes per day. This data, coupled with the broad ubiquity of the Internet, has led to an unprecedented wealth of information available at every desktop. However, while each group or project may be expert in its own area of focus, and thus develops data and knowledge in its own field, the environment is inherently integrated: the climate affects the soil and oceans, plants and animals react to their ecological context, and all of it relates to a humankind that is both affected by its environment and an agent of change within it. What if it were possible to integrate all the information that has ever been collected and to make it freely and easily available, within an environment that allowed one to browse this information regardless of source or format, integrate it across all fields of study, and visualize it interactively, collaboratively, quickly, and easily? Such a tool could allow climatologists to examine their interrelations with biologists, ecologists, and other environmental researchers, as well as with epidemiologists and others who study humankind.

The Case “Against” the Smart Grid

Friday, October 2, 12-1 p.m., 250 Sutardja Dai Hall, UC Berkeley
Bruce Nordman, Energy Analysis Department, LBNL

Amid all the current cheerleading around the “Smart Grid,” there is a lack of critical thinking about the design choices underlying the dominant paradigms being put forward, and about how current efforts do or don’t relate to any long-term strategy we have. In addition, in the past decades we have learned a great deal about how to design and architect networks, how they evolve, how they have collided with energy use, and how they work in practice. This talk will explore some of these topics and suggest a path forward that could result in significantly more energy savings than our present one.



About Computing Sciences at Berkeley Lab

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.