InTheLoop | 07.14.2014

CRD's Prabhat Gives Talk on Taming Big Data at Climate Workshop

Prabhat, a researcher in CRD's Scientific Computing Group, is giving a talk today on “Big Data Analytics at Scale: Lessons learned from processing CMIP-5 on Mira” at the Focused Technical Workshop (FTW) on Climate Science co-organized by ESnet and Internet2. In his talk, Prabhat describes a project with ESnet's Eli Dart to download a total of 56 terabytes of data from 21 different sites around the world over the course of three months. After two weeks of preprocessing the data on Hopper at NERSC, the resulting 15 TB was moved to the Mira IBM supercomputer, where it was analyzed on 750,000 processors using TECA, the Toolkit for Extreme Climate Analysis that he and colleagues at Berkeley Lab have been developing over the past three years. »Read more.
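The scale-out pattern behind such an analysis is simple to sketch, even though TECA's real machinery is far more sophisticated. Below is a minimal mpi4py illustration of the general approach (the file names and the detect_extremes() kernel are hypothetical placeholders, not TECA's actual API): each process claims a disjoint subset of the preprocessed climate files, analyzes them independently, and a single reduction combines the results.

    from mpi4py import MPI

    def detect_extremes(path):
        """Placeholder for a per-file analysis kernel (e.g. storm detection)."""
        return 0

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each rank claims a disjoint subset of the preprocessed climate files.
    all_files = ["cmip5_slice_%05d.nc" % i for i in range(100000)]
    my_files = all_files[rank::size]

    # Scan local files independently; no communication is needed until the end.
    local_events = sum(detect_extremes(f) for f in my_files)

    # Combine per-rank counts into a global total on rank 0.
    total = comm.reduce(local_events, op=MPI.SUM, root=0)
    if rank == 0:
        print("candidate extreme events:", total)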

Office of Science Web Site Commemorates NERSC's 40th Anniversary

The Department of Energy's Office of Science is commemorating NERSC's 40th Anniversary on its homepage today. The story focuses primarily on the NERSC Nobel Lecture series. »Read more.

Hot Plasma Partial to Bootstrap Current, Fusion Simulations Show

Supercomputers at NERSC are helping plasma physicists “bootstrap” a potentially more affordable and sustainable fusion reaction. By fusing atoms together, rather than splitting them, scientists hope to tap a nearly limitless and clean energy source fueled by the same reaction that powers the sun. To do that, researchers heat gases inside doughnut-shaped metal chambers to millions of degrees and confine the resulting hot plasma using magnetic forces. But there's a lot going on inside the plasma as it heats up, not all of it good. Driven by electric and magnetic forces, charged particles swirl around and collide with one another, and the central temperature and density are constantly evolving. In addition, plasma instabilities disrupt the reactor’s ability to produce sustainable energy by increasing the rate of heat loss.

Fortunately, research has shown that other, more beneficial forces are also at play within the plasma. For example, if the pressure of the plasma varies across the radius of the vessel, a self-generated current will spontaneously arise within the plasma, a phenomenon known as the "bootstrap" current.
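For reference, a standard large-aspect-ratio estimate from neoclassical theory (a textbook scaling, not a result specific to the study described here) ties the bootstrap current density directly to that radial pressure gradient:

    j_b \approx -\sqrt{\epsilon} \, \frac{1}{B_\theta} \, \frac{dp}{dr}

where \epsilon = r/R is the inverse aspect ratio of the torus, B_\theta is the poloidal magnetic field, and dp/dr is the radial pressure gradient. The steeper the pressure profile, the more current the plasma drives for itself.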

Now an international team of researchers has used NERSC supercomputers to further study the bootstrap current, which could help reduce or eliminate the need for an external current driver and pave the way to a more cost-effective fusion reactor. »Read more.

NERSC Director to Address Italy's National Institute for Nuclear Physics

NERSC Director Sudip Dosanjh is giving a lecture this week at the Istituto Nazionale di Fisica Nucleare (INFN) in Cagliari, Italy, on "Enabling Scientific Discovery Through Supercomputing and Data Analysis." The INFN is the coordinating institution for nuclear, particle and astroparticle physics in Italy and collaborates with CERN and various other laboratories around the world. He will discuss the history of supercomputing and future technology trends, including the challenges associated with reaching Exascale. Sudip is also giving an overview of NERSC and describing NERSC's extreme-scale computing and data strategies. He will discuss the Cori system and NERSC's plans for migrating science applications to this new architecture. In addition, he will describe how Cori will meet both the scientific computing and data analysis needs of NERSC's broad set of users.

This Week's CS Seminars

»CS Seminars Calendar

All-Electron Implementation of the GW Approximation and Its Application to Semiconductors

Monday, July 14, 2014, 10:00–11:00am, 50B-4205

Iek-Heng Chu, Physics Department, University of Florida

The widely used density functional theory (DFT) has achieved great success in predicting physical properties of various systems. However, current DFT exchange-correlation functionals are inadequate for providing accurate information about the electronic band gap and excited states. The GW approximation, within many-body perturbation theory, is often adopted to improve DFT electronic structure results and yields quasiparticle energies that agree with experiments. In this seminar, I will present an all-electron implementation of the GW approximation based on the full-potential linearized augmented plane wave (FP-LAPW) method. One advantage of working in the FP-LAPW framework is that the widely used pseudopotential approximation, in which the pseudo wave functions are used and the core-valence interaction is treated at the DFT level, is not required. The GW formalism will first be introduced. Then the results for several semiconductors will be discussed and the calculated band gaps will be compared with experimental values.
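For background, the central quantity in the GW approximation is the electron self-energy, built from the one-particle Green's function G and the screened Coulomb interaction W (standard notation from many-body perturbation theory, not specific to the implementation presented in this talk):

    \Sigma(\mathbf{r}, \mathbf{r}'; \omega)
      = \frac{i}{2\pi} \int d\omega' \, e^{i \omega' \eta} \,
        G(\mathbf{r}, \mathbf{r}'; \omega + \omega') \, W(\mathbf{r}, \mathbf{r}'; \omega')

Quasiparticle energies then typically follow as a first-order correction to the DFT eigenvalues, replacing the exchange-correlation potential v_{xc} with the self-energy:

    E_{n\mathbf{k}}^{\mathrm{QP}} \approx \varepsilon_{n\mathbf{k}}^{\mathrm{DFT}}
      + \langle \psi_{n\mathbf{k}} | \Sigma(E_{n\mathbf{k}}^{\mathrm{QP}}) - v_{\mathrm{xc}} | \psi_{n\mathbf{k}} \rangle

This correction is the source of the improved band gaps referred to in the abstract.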

Selecting the Proper Compiler Optimization Strategy Using Program Analysis and Machine Learning

Friday, July 18, 2014, 9:30–11:00am, 50B-4205

Eun Jung Park, University of California, San Diego

By providing a large number of optimizations, modern compilers have become more powerful and can benefit programs on different metrics such as performance, power efficiency, and binary size. However, this power confronts developers with another problem: we need an intelligent way of utilizing compiler optimizations to take full advantage of them. In this seminar, we discuss three program analysis techniques (performance counters, graph-based representations, and source-level patterns) as the basis for building an effective machine learning model that selects the proper compiler optimizations. We compare these techniques in terms of their capability to predict the optimizations that help applications most, which ultimately lets users get more benefit from emerging architectures than hand tuning or standard compiler optimizations provide. We also compare how each technique can help users understand applications better and extract the important computations in them.
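As a concrete illustration of the general approach (a minimal sketch with invented features and labels, not the speaker's actual pipeline), one can train a classifier that maps program features, such as normalized performance counters, to the optimization sequence that historically performed best:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical training data: one row of normalized performance counters
    # per program (cache misses, branch mispredictions, memory bandwidth),
    # labeled with the index of the optimization sequence that ran fastest.
    X = np.array([[0.12, 0.55, 0.31],
                  [0.40, 0.10, 0.77],
                  [0.08, 0.61, 0.25],
                  [0.45, 0.09, 0.80]])
    y = np.array([2, 0, 2, 0])  # best optimization-sequence index per program

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # For a new program, predict which optimization sequence to apply
    # instead of exhaustively compiling and timing every candidate.
    new_program = np.array([[0.11, 0.58, 0.30]])
    print("predicted optimization sequence:", model.predict(new_program)[0])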

Overview of the Exascale Challenges and ASCR's Exascale Program

Friday, July 18, 2014, 11:00am–12:30pm, 50B-4205

John Shalf, Computer and Data Sciences Department, Lawrence Berkeley National Laboratory

This talk kicks off the Berkeley Lab Computing Sciences Exascale Seminar Series, a multi-week lecture series that will describe ongoing research to prepare for the HPC systems of the coming decade. This effort is typically referred to as "Exascale," but the changes are affecting computing technology at all scales. The talk will provide an overview of the challenges posed by the physical limitations of the underlying silicon-based CMOS technology, introduce the next generation of emerging machine architectures, and describe the anticipated effect on the way we program machines in the future.

For the past twenty-five years, a single model of parallel programming (largely bulk-synchronous MPI) has for the most part been sufficient to translate algorithms into reasonable parallel programs, even for more complex applications. In 2004, however, a confluence of events changed forever the architectural landscape that underpinned our assumptions about what to optimize for when we design new algorithms and applications. We have been taught to prioritize and conserve things that were valuable 20 years ago, but the new technology trends have inverted the value of our former optimization targets. The time has come to examine the end result of our extrapolated design trends and use them as a guide to re-prioritize what resources to conserve in order to derive performance for future applications.
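For readers unfamiliar with the term, the bulk-synchronous model alternates purely local computation with global communication steps at which all processes synchronize. A minimal mpi4py sketch of that pattern (illustrative only, not code from the talk):

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank owns a chunk of data; the compute phase is purely local.
    local = np.random.rand(1000000)

    for step in range(5):
        local_sum = np.array([local.sum()])   # local computation phase
        global_sum = np.empty(1)
        # Global communication phase: every rank blocks here until all
        # ranks arrive, the defining synchronization of the bulk model.
        comm.Allreduce(local_sum, global_sum, op=MPI.SUM)
        local *= 0.5                          # next local phase

    if rank == 0:
        print("finished 5 bulk-synchronous steps, sum =", global_sum[0])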

This talk will describe the challenges of programming future computing systems. It will then provide some highlights from the search for durable programming abstractions that more closely track emerging computer technology trends, so that when we convert our codes over, they will last through the next decade.