
InTheLoop | 06.22.2015


Mueller Optimizes Codes to Cut Computational Costs

As supercomputers play an increasingly important role in scientific research, demand for time on the systems is also increasing, with researchers wanting up to 10 times more than is available. As a result, scientists need to make the most efficient use of their time allocations. This is especially true when running simulations of complex problems like climate change and astrophysics that can require tens of thousands of processors for days or weeks on end.

Such simulations are often described as “computationally expensive” because of the amount of resources they consume. Finding a way to keep expenses down while still getting the most science from the simulations is the specialty of Juliane Mueller, the 2014 Alvarez Fellow in the Computational Research Division at Lawrence Berkeley National Laboratory. She’s a member of the Center for Computational Sciences and Engineering, which develops algorithms to study complex problems ranging from combustion to supernovae. »Read more.

Friesen is First Post-doc in NERSC's Exascale Science Applications Program

The first of eight post-doctoral researchers participating in the NERSC Exascale Science Applications Program (NESAP) is now working full time at NERSC. Brian Friesen, a graduate student in computational astrophysics at the University of Oklahoma, joined the NESAP team May 18. He’s been assigned to Ann Almgren’s group in Berkeley Lab’s Center for Computational Sciences and Engineering, where he is focused on making improvements to BoxLib, an adaptive mesh refinement framework used in a variety of scientific codes.

During his year-long stint as a NESAP post-doc, Friesen will work to optimize or re-design existing algorithms in BoxLib to take advantage of the massive concurrency of Cori, NERSC’s next-generation supercomputer that features a new manycore architecture. »Read more.

Turner to Retire from NERSC User Services Group

Long-time User Services Group consultant David Turner is hanging up his headset after 17 years at NERSC. Turner, whose last official day is June 26, answers a few questions about his years at NERSC, including how he first became interested in computer science and the diverse career path that ultimately brought him to Berkeley Lab. »Read more.

NERSC, CRD Staff Win Best Paper Award at FTXS Workshop

Brian Austin of NERSC and Eric Roman and Xiaoye “Sherry” Li of the Computational Research Division won the Best Paper Award for “Resilient Matrix Multiplication of Hierarchical Semi-Separable Matrices” at the Fault Tolerance for HPC at eXtreme Scale (FTXS) Workshop. FTXS was held June 15-19 in Portland, Ore., as part of the 24th International ACM Symposium on High Performance Distributed Computing. »Read more.

ESnet's Fasterdata is Subject of Podcast

For over 25 years, ESnet has operated an advanced research network with the goal of enabling the highest levels of performance for the Department of Energy (DOE) scientific community. During this time, ESnet engineers have identified a common set of issues that hinder performance, and they share their experiences and findings in the Fasterdata knowledge base. Hosted at fasterdata.es.net, the collection of proven, operationally sound methods for troubleshooting and solving performance issues is open to all. In a recent Research Computing and Engineering (RCE) podcast, ESnet's Eli Dart explains what Fasterdata is all about. »Listen to the podcast.

Mercury News: Berkeley Lab scientists search Amazon for clues to impacts of climate change, drought

In an article published last week, the San Jose Mercury News highlighted the work of Berkeley Lab scientists who use NERSC in their climate studies:

By peering deeply into the Amazon, which has experienced two "megadroughts" in the past decade, scientists may be able to better predict the impact of climate change globally...and also collect clues about how California's vegetation might be affected by drought in the coming years, such as if and when trees might start dying off en masse.
Once data is collected, the researchers will combine their work with more intensive field investigations in subsequent phases to build a new rainforest model using supercomputers such as the National Energy Research Scientific Computing Center (NERSC) in Oakland.

»Read more.

Retiring this Year? Let Us Know.

The monthly Berkeley Lab Computing Sciences newsletter is compiling a list of all 2015 retirees in Computing Sciences for a mention in the June 2015 edition of the newsletter, which will be published on June 30. The newsletter is distributed to all Computing Sciences staff, the Berkeley Lab Directorate, DOE Office of Science Program Managers and a select list of interested subscribers. If you are retiring this year and would like to be mentioned in the newsletter, please email Linda Vu at »lvu@lbl.gov.

PopSci Contest Calls for Stunning Scientific Visuals

Once again, Popular Science has teamed up with the National Science Foundation to issue a challenge: Can you visualize a scientific idea, concept or story in an arresting way? If so, submit your work to the 2016 Vizzies. The competition offers five categories: photography, illustration, posters and graphics, interactive, and video, which should cover just about every way to communicate science visually. The competition is accepting submissions through September 15, at 11:59 p.m. Pacific time. »Learn more.

This Week's CS Seminars

»CS Seminars Calendar

Towards Realistic Direct Numerical Simulations of Turbulent Flows

Tuesday, June 23, 10:00-11:00 am, 943-238 Conference Room, Oakland Scientific Facility
Ankit Bhagatwala, Sandia National Laboratories

Detailed, first-principles simulations of turbulent fluid flows, known as Direct Numerical Simulations (DNS), remain the ultimate benchmark for Computational Fluid Dynamics (CFD) in the field of fluid turbulence. They offer a level of insight not available from physical experiments or model-based simulations. However, they are constrained in size and complexity by the availability and optimal use of computational resources, and it is for these types of simulations that High Performance Computing (HPC) has had its greatest impact. During this talk, I will present two studies I have performed that illustrate this class of flow simulations. The first problem is the interaction of a blast wave with isotropic turbulence, the fluid dynamic abstraction of a sub-problem encountered in Inertial Confinement Fusion (ICF), a fusion-based energy technology that does not have the radiation side effects of fission-based sources. The second problem, from the field of combustion, is Reactivity Controlled Compression Ignition (RCCI), a new engine technology that promises very high levels of thermal efficiency, comparable to the best hybrids available today. In each case, I will present a highly simplified but important fluid dynamic problem, abstracted from the main problem, that captures the key fluid dynamic features of interest, along with the insights gained from these simulations. Along the way, I will also discuss details of the CFD codes involved and the numerical methods deployed therein.

MPI-3.0 and MPI-3.1 Overview

Wednesday, June 24, 12:00-1:00 pm, 943-238 Conference Room, Oakland Scientific Facility
Dr. Rolf Rabenseifner, High Performance Computing Center Stuttgart

After a brief introduction to the MPI Forum by Alice Koniges, Rolf Rabenseifner will give a comprehensive overview of the standard. MPI-3.0 was approved by the MPI Forum on September 21, 2012, and MPI-3.1 was released June 4, 2015. MPI-3.0 is a major update to the MPI standard. The updates extend collective operations with nonblocking versions and with sparse, scalable irregular neighbor collectives, and add a new Fortran 2008 binding, a new tools interface, new routines to handle large counts, and extensions to the one-sided operations, including a new shared memory programming model. MPI-3.1 is mainly an errata update to the MPI standard. New functions added include routines to manipulate MPI_Aint values in a portable manner, nonblocking collective I/O routines, and additional routines in the tools interface. The talk will give an overview of the new methods and will discuss in more detail the new MPI shared memory programming model.

U.S. Toll-Free: +1 (866) 740-1260
Access Code: 6333782

CS Summer Student Seminar Series

Introduction to HPC

Thursday, June 25, 10:30am-12:00pm
OSF (Oakland Scientific Facility), 415 20th Street, 2nd Floor

Rebecca Hartman-Baker, Yulok Lam, and Ray Spence of NERSC, LBNL

What is High Performance Computing and storage, or HPC? Who uses it and why? We'll talk about these questions as well as what makes a supercomputer so super and what's so big about scientific big data. Finally, we'll discuss the challenges facing system designers and application scientists as we move into the many-core era of HPC.

10:30am Arrive at OSF, check in at first floor
10:45 - 11:15am Introduction to HPC, Richard Gerber and Rebecca Hartman-Baker, NERSC
11:15am - noon Tour of NERSC machine room, Yulok Lam and Ray Spence (Please see special tour requirements below.)

   –  Closed-toe shoes are required to tour the computer room.
   –  A light sweater or jacket is recommended for the computer room visit.
   –  Please be prompt. Arrive at OSF 10-15 minutes in advance.
   –  Bring your badge.
   –  No onsite parking.
»Transportation and parking options.