
InTheLoop | 03.02.2015

March 2, 2015

First Direct Observations: Greenhouse Effect Increasing at Earth's Surface

For the first time, scientists have observed an increase in carbon dioxide’s (CO₂) greenhouse effect at the Earth’s surface. The researchers measured atmospheric CO₂’s increasing capacity to absorb thermal radiation emitted from the Earth’s surface over an 11-year period at two locations in North America. The research, reported in the journal Nature, relied on supercomputers at the Department of Energy's National Energy Research Scientific Computing Center (NERSC) to help achieve this result. »Read more.

New Algorithm Speeds Simulations of Ultrafast Processes

To combat the high computing cost associated with real-time simulations in atomic-level materials research, Berkeley Lab's Lin-Wang Wang co-developed a new real-time time-dependent density functional theory algorithm. The algorithm, developed on NERSC's Hopper supercomputer, allows scientists to simulate phenomena for systems of around 100 atoms and opens the door for efficient real-time simulations of ultrafast processes and electron dynamics. »Read more.

Raindrops' Physics May Affect Climate Model Accuracy

That big clouds get bigger while small clouds shrink may seem like a simple concept, but the mechanisms behind how clouds are born, grow and die are surprisingly complex. Using NERSC supercomputers, researchers from the Pacific Northwest National Laboratory (PNNL) found that factors as small as how the sizes of raindrops were represented in a computer model made a big difference in the accuracy of results. This work could help climate scientists better understand future weather patterns and global climate change. »Read more.

ESnet Opens 40G perfSONAR Host for Network Performance Testing

ESnet has deployed the first public 40Gbps production perfSONAR host directly connected to an R&E backbone network, allowing research organizations to test and diagnose the performance of network links up to 40 gigabits per second.

The host, located in Boston, Mass., is available to any organization in the research and education (R&E) networking community. More and more, organizations are setting up their own 40 Gbps data transfer nodes to help systems keep up with the increasing size of research data sets. »Read more.

Innovations in Networking Award Goes to ESnet

The Department of Energy’s Energy Sciences Network (ESnet) is being honored by the Corporation for Education Network Initiatives in California (CENIC) as a recipient of the 2015 Innovations in Networking Award for High-Performance Research Applications for its 100-Gigabit Software-Defined Networking (100G SDN) Testbed.

The ESnet 100G SDN Testbed provides network researchers with a realistic environment for testing 100G application/middleware experiments. It also supports several 10G paths for SDN experiments, and will be significantly enhanced this summer with new OpenFlow v1.3 hardware.

Innovations in Networking Awards are presented each year by CENIC to highlight exemplary innovations that leverage ultra-high-bandwidth networking, particularly where those innovations have the potential to revolutionize the ways in which instruction and research are conducted or to further their deployment in underserved areas. »Read more.

NERSC's Antypas Named One of HPCwire's People to Watch in 2015

Katie Antypas, NERSC Deputy for Data Science and Services Department Head, has been named one of HPCwire's People to Watch in 2015. Every year, the web news site bestows this honor on "stars on the rise in high performance computing." »Read more.

Wehner: Climate Change Will Hit U.S. in the Breadbasket 

NBC News recently quoted Michael Wehner, a CRD senior scientist, in a story about the future of U.S. food production in the face of climate change. In the story, Wehner and other scientists warn of a looming crisis in U.S. food production due to shifts in weather patterns: "Adaptation strategies should be under way," Wehner said. "Denying this, I think, is a disservice to the public." »Read more.

Former Alvarez Fellow Awarded Sloan Fellowship

Lin Lin, an applied mathematician who was an Alvarez Fellow in the Computational Research Division from 2011-2014, has been awarded a 2015 Sloan Foundation Research Fellowship. The fellowship is among 126 announced this month by the Alfred P. Sloan Foundation. They go to outstanding U.S. and Canadian researchers whose achievements and potential identify them as rising stars and the next generation of scientific leaders, according to the foundation. The annual fellowships provide $50,000 to further the research of early-career scientists and scholars. Lin's work focuses on developing novel, efficient and reliable numerical algorithms and mathematical software tools for computational chemistry and materials science. »Read more.

Computing Sciences Bolster APS March Meeting 2015

At the American Physical Society (APS) March Meeting 2015, held this week in San Antonio, Texas, Berkeley Lab Computing Sciences researcher Andrew Canning will be presented with his APS Fellowship, announced in December 2014. Canning will also present three papers prepared with CS colleagues, as follows:

  • Jack DeSlippe (NERSC): Hybrid MPI/OpenMP First Principles Materials Science Codes for Intel Xeon Phi (MIC) based HPC: The Petascale and Beyond.
  • Bharat Medasani (CRD) and Maciej Haranczyk (CRD): Structure Defect Property Relationships in Binary Intermetallics.
  • Former post-doctoral fellow Slim Chourou (CRD): Theoretical Studies of the Optical Properties of Eu doped Barium Mixed Halides: From X-ray Storage Phosphor to Bright Scintillator.

Many other Berkeley Lab researchers are involved in the scientific program, which can be found on the meeting's web site.

Registration Open for Blue Waters Symposium

The 2015 Blue Waters Symposium will be held May 10-13, 2015, at the Sunriver Resort in Oregon. At the 2014 Blue Waters Symposium in Champaign, scientists and engineers described how NCSA's petascale Blue Waters supercomputer has advanced a range of research areas — including sub-atomic physics, weather, biology, astronomy, and many others. On top of that, several working groups discussed future directions for computing and data. This year's symposium will include more reports of breakthrough results as well as address three critical themes: the integration of computing and data; the impact of high-performance computing and data analysis; and education and workforce development related to high-performance computing and data. The symposium serves as the official meeting for National Science Foundation Blue Waters principal investigators. There is no registration fee, but space is limited. »Register online now.

Register Now for Free MOOSE Workshop

March 24-26, UC Berkeley will host a free MOOSE (Multiphysics Object Oriented Simulation Environment) framework workshop. This is an intensive three-day introduction to MOOSE, an open-source, object-oriented framework for coupled multiphysics simulations. MOOSE has been developed by the Idaho National Laboratory since 2008 and has been applied to many areas of science and engineering including nuclear engineering, material microstructure evolution, chemistry, geomechanics and superconductivity. The workshop will be held in room 190 of the Doe Library on the UC Berkeley campus. Space is limited, so those interested should »register as soon as possible. »Contact Max Fratoni for more information.

This Week's CS Seminars

From Quantum Chemistry to Quantum Computing 

Monday, March 2, 9:30 – 10:30 a.m., Bldg. 50B, Room 4205

Jarrod McClean, Harvard University

Quantum chemistry offers a route to prediction of material and chemical properties without the need for empirical parameterization. Unfortunately, exact solution of the governing equations is prohibitively expensive, and many of our best approximations can break down without warning. Recently, quantum computation has emerged as a natural solution to this problem. By using a quantum computer to emulate a quantum system, exact solutions can once again become feasible, albeit under different conditions. In this talk I will introduce the challenges of quantum chemistry and how a quantum computer can help to overcome them. I will then detail our recent work in the development of an algorithm capable of using minimal quantum resources to gain an advantage over classical computation before a universal quantum computer is constructed. Finally, I will discuss insights into the development of new classical algorithms gained through thinking in the language of quantum computation.
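
A common way to use minimal quantum resources is a hybrid variational approach, in which a quantum device prepares trial states and measures energies while a classical optimizer updates the parameters. As a purely classical, hypothetical illustration of the underlying variational principle (a sketch only, not the algorithm presented in the talk), the Python example below minimizes the energy of a toy two-level Hamiltonian over a one-parameter trial state.

    # Minimal classical sketch of the variational principle behind hybrid
    # quantum-classical eigensolvers: minimize <psi(theta)|H|psi(theta)>.
    # The Hamiltonian and ansatz are toy examples, not from the talk.
    import numpy as np
    from scipy.optimize import minimize_scalar

    H = np.array([[0.0, 0.5],
                  [0.5, 1.0]])          # toy two-level Hamiltonian

    def energy(theta):
        """Energy expectation in the normalized trial state (cos(theta), sin(theta))."""
        psi = np.array([np.cos(theta), np.sin(theta)])
        return psi @ H @ psi

    result = minimize_scalar(energy, bounds=(0.0, np.pi), method="bounded")
    print("variational estimate:", result.fun)
    print("exact ground state:  ", np.linalg.eigvalsh(H)[0])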

Analysis and Simulation of Molecular Systems: Memory function approach and uncertainty quantification

Tuesday, March 3, 10:30–11:30 a.m., Bldg. 50B, Room 4205

Changho Kim, Division of Applied Mathematics, Brown University

Understanding the roles of microscopic interactions and internal fluctuations becomes indispensable for faithful description of multiscale phenomena in many areas, including biology, nanotechnology, and soft matter. In the first part of the talk, I will introduce the Mori-Zwanzig formalism and the memory function approach, which provide an effective tool to investigate the microscopic origin of fluctuations in stochastic mesoscopic description. For a colloidal particle in a molecular fluid, the following are investigated by the memory function approach: (1) finite-mass effect of the colloidal particle, (2) microscopic origin of the friction force, (3) long-time algebraic decay of the velocity autocorrelation function, and (4) confinement effects. In the second part, I will present a theoretical and computational framework for the uncertainty quantification in molecular dynamics (MD) simulations. Compared with the continuum-level description, molecular systems are subject to internal fluctuations, which result in statistical errors in MD simulation results. For the evaluation of the diffusion coefficient of the colloidal particle, both statistical and systematic errors are investigated.
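
As a self-contained, hypothetical illustration of the statistical-error question raised in the second part of the abstract, the Python sketch below estimates a diffusion coefficient from synthetic Brownian trajectories via the mean-squared displacement and attaches a simple standard error by treating independent trajectories as blocks. The synthetic data stand in for real MD trajectories; this is not the speaker's framework.

    # Estimate a diffusion coefficient and its statistical error from
    # particle trajectories using the mean-squared displacement (MSD).
    import numpy as np

    rng = np.random.default_rng(0)
    D_true, dt, n_steps, n_traj = 1.0, 1e-3, 5000, 32   # synthetic Brownian data

    # Independent 3D Brownian trajectories: each step component ~ N(0, 2*D*dt).
    steps = rng.normal(0.0, np.sqrt(2.0 * D_true * dt), size=(n_traj, n_steps, 3))
    positions = np.cumsum(steps, axis=1)

    # Per-trajectory estimate: MSD(t) ~ 6*D*t in 3D, so fit D from the slope.
    times = dt * np.arange(1, n_steps + 1)
    msd = np.sum(positions**2, axis=2)                   # shape (n_traj, n_steps)
    D_est = np.array([np.polyfit(times, m, 1)[0] / 6.0 for m in msd])

    # Treat trajectories as independent blocks to quantify the statistical error.
    D_mean = D_est.mean()
    D_err = D_est.std(ddof=1) / np.sqrt(n_traj)
    print(f"D = {D_mean:.3f} +/- {D_err:.3f}  (true value {D_true})")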

Towards Reliable and Automated Simulations in Continuum Mechanics

Wednesday, March 4, 9:30 - 10:30 a.m., Bldg. 50B, Room 2222

Masayuki Yano, Department of Mechanical Engineering, Massachusetts Institute of Technology

I present work towards the development of reliable computational tools for partial differential equations (PDEs) in continuum mechanics. Here, reliability refers to the ability to estimate and control the two sources of error in numerical predictions: the model error that arises from mathematical modeling of the true physics, and the discretization error that arises from numerical approximation of the mathematical model. The goal is to develop reliable computational tools that maximize their predictive potential and utility in understanding physical phenomena and ultimately making engineering decisions.

In the first part of the talk, we develop a versatile adaptive finite element PDE solver. The solver consists of three key ingredients: a high-order discontinuous Galerkin method; an output error estimate; and an anisotropic mesh optimization strategy. The adaptive solver iterates toward a mesh that minimizes the output error for a given computational cost in an automated manner. We demonstrate the effectiveness of the strategy for aerodynamic flows over a wide range of Reynolds and Mach regimes.

In the second part of the talk, we develop a model reduction strategy that provides rapid and reliable solution of parametrized PDEs in real-time and many-query contexts. The marginal cost of evaluation is minimized through an offline-online computational decomposition: the offline stage incorporates a simultaneous spatio-parameter adaptivity that identifies optimal mesh-refinement and parameter-sampling sequences; the online stage provides a certified output for any parameter value with complexity independent of the finite element resolution. We demonstrate the effectiveness of the strategy for parametrized elasticity systems.
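
A minimal sketch of the generic offline-online split that model reduction relies on is given below, using a plain POD-Galerkin reduced model for a hypothetical parametrized linear system. It is illustrative only and is not the certified, adaptively constructed reduced-basis method described in the talk.

    # Offline-online model reduction sketch for A(mu) u = f with A(mu) = A0 + mu*A1.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    A0 = np.diag(2.0 + rng.random(n))
    A1 = np.diag(rng.random(n))
    f = rng.random(n)

    # Offline (expensive, done once): sample the parameter, collect snapshots,
    # and compress them into a small POD basis.
    train_mus = np.linspace(0.1, 10.0, 20)
    snapshots = np.column_stack([np.linalg.solve(A0 + mu * A1, f) for mu in train_mus])
    V, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    V = V[:, :8]                                   # keep 8 POD modes

    # Pre-project the parameter-independent pieces so the online stage never
    # touches the full-size system.
    A0_r, A1_r, f_r = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ f

    # Online (cheap, for any new parameter value): solve an 8x8 system.
    mu_new = 3.7
    u_r = np.linalg.solve(A0_r + mu_new * A1_r, f_r)
    u_full = np.linalg.solve(A0 + mu_new * A1, f)
    print("relative error:", np.linalg.norm(V @ u_r - u_full) / np.linalg.norm(u_full))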

High-Resolution, Non-Oscillatory Finite-Difference Schemes for Compressible Flows

Wednesday, March 4, 10:30am - 11:30am, Bldg. 50B, Room 4205

Debojyoti "Debo" Ghosh, Mathematics and Computer Science Division, Argonne National Laboratory

Numerical methods for compressible, turbulent flows need a high spectral resolution to model the relevant length scales; in addition, they must yield non-oscillatory solutions across discontinuities and sharp gradients. The first part of this talk will focus on the derivation, analysis, and implementation of the compact-reconstruction weighted essentially non-oscillatory (CRWENO) scheme that combines the high spectral resolution of compact finite-difference schemes with the non-oscillatory behavior of the WENO scheme, and its performance on benchmark flow problems. The CRWENO scheme is part of a larger family of non-linear compact schemes that require the solution of banded systems of equations at each time-integration step/stage; past attempts at parallelizing such methods resulted in poor scalability. An efficient, scalable implementation for non-linear, tridiagonal compact schemes will be described, and the computational efficiency of this implementation will be demonstrated for massively parallel simulations.
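
Compact reconstructions such as CRWENO couple neighboring cell values implicitly, which is why each sweep requires solving a banded (tridiagonal) system. The Python sketch below shows only the serial Thomas-algorithm kernel for such a system, as a hypothetical illustration; the talk's contribution is precisely the scalable parallel treatment that this simple kernel lacks.

    # Serial Thomas algorithm for the tridiagonal systems that a compact
    # reconstruction produces at every time-integration step/stage.
    import numpy as np

    def thomas(a, b, c, d):
        """Solve a tridiagonal system with sub-diagonal a, diagonal b,
        super-diagonal c, and right-hand side d (a[0] and c[-1] unused)."""
        n = len(d)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Quick check against a dense solve on a diagonally dominant system.
    n = 8
    rng = np.random.default_rng(2)
    a, b, c, d = rng.random(n), 4.0 + rng.random(n), rng.random(n), rng.random(n)
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    assert np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d))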

The second part of the talk will describe the development of a high-resolution conservative finite-difference algorithm for atmospheric flows based on the WENO and CRWENO schemes that solves the Euler/Navier-Stokes equations in terms of the mass, momentum, and energy. A well-balanced formulation for the governing equations will be introduced that ensures that the hydrostatic balance is preserved by the spatial discretization. Explicit time-integration methods suffer from a linear stability restriction based on the acoustic mode. A characteristic-based flux splitting for implicit-explicit (IMEX) time-integration will be proposed that separates the entropy (slow) and acoustic (fast) modes in the hyperbolic flux. The former is integrated explicitly while the latter is integrated implicitly, resulting in the time step size being limited by the flow velocity. The performance of this approach will be demonstrated on benchmark atmospheric flow problems.

Applied Mathematics: Introduction to Gradient-weighted Moving Finite Elements

Wednesday, March 4, 2:30 – 3:30 p.m., 939 Evans Hall, UC Berkeley Campus

Keith Miller, University of California, Berkeley

GWMFE is especially suited to PDE problems with sharp moving fronts. The moving nodes tend to concentrate and move with the fronts, allowing far fewer nodes and much larger timesteps. It does this by treating the solution as an evolving manifold and discretizing it with an evolving piecewise linear manifold. I will explain the variational and mechanical interpretations of GWMFE, our BDF2 stiff ODE solver, and our nonlinear Krylov solver for the implicit equations. I will show 2D graphics for the Shallow Water Equations, for Normal and Vertical Motion by Mean Curvature, for the Stefan Problem for melting ice, and also an example from Neil Carlson's 3D GWMFE code. I will discuss the necessity of adding global adaptivity to our codes (insertion and deletion of nodes, flipping edges) and my largely unsuccessful search for "stabilized" versions of MFE which prevent nodes from drifting with the flow in transient advection problems.

PeerSync: Integrating the GPU with a Network Interface

Thursday, March 5, 1–2 p.m., Bldg. 50F, Room 1647

Davide Rossetti, Software Engineer, NVIDIA

In the GPU off-loading programming model, the CPU is the initiator: it prepares and orchestrates work for the GPU. In GPU-accelerated multi-node programs, the CPU has to do the same for the network interface as well. But both the GPU and the network interface have sophisticated hardware resources, and these can be effectively short-circuited so as to take the CPU out of the loop altogether. PeerSync is a set of CUDA-InfiniBand Verbs interoperability APIs that opens up a wide range of possibilities. It also provides a scheme to go beyond the GPU-network pair, applying the same ideas to other third-party devices.

Compiler Optimizations and Autotuning for High-Performance Stencil Codes

Friday, March 6, 10–11 a.m., Bldg. 50F, Room 1647

Protonu Basu, School of Computing, University of Utah

As computer architectures evolve in diverse ways to meet the demands of exascale computing, programmers will be burdened to utilize the abundant on-chip parallelism and orchestrate data movement across deep memory hierarchies. The challenge of programming for this architectural complexity and diversity is compounded by modern compilers, which lack accurate cost models and application-specific knowledge to generate high-performance code. This forces domain scientists to spend considerable effort manually tuning their code; unfortunately, this significant human effort must be repeated for each new architecture. My research aims to free application scientists from architecture-specific code tuning by building autotuners which leverage domain-specific compiler optimizations.

The targets of my research on compiler-directed autotuning are stencil computations and Geometric Multigrid (GMG). Stencils are frequently used in partial differential equation (PDE) solvers and are of vital interest to the DOE community. To demonstrate the value of compiler-directed autotuning, I optimize a variety of stencils (variable-coefficient, constant-coefficient and higher-order) with very distinct flop/byte ratios which stress different sub-systems of the chip. Novel domain-specific transformations are used to manage data movement for memory-bound stencils, and optimizations developed to reduce floating point operations and manage register reuse are used for compute-intensive higher-order stencils. The transformations are implemented in a polyhedral compiler framework and autotuner.

In collaboration with domain scientists and code optimization experts from LBL, the stencils were implemented and optimized for GMG. Our compiler-generated code was several times (up to 4x) faster than state-of-the-art compilers and matched the performance of code tuned manually by experts. Furthermore, our optimizations for higher-order stencils yielded an optimized 10th-order solver running at half the speed of an optimized 2nd-order solver while giving many orders of magnitude better accuracy.
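
As a hypothetical illustration of the kind of kernel and tuning parameter involved, the Python sketch below applies a constant-coefficient 5-point stencil both directly and tile by tile, with the tile size standing in for one of the knobs an autotuner would sweep. It is not code from the speaker's framework.

    # 2D 5-point constant-coefficient stencil, reference vs. tiled sweep.
    import numpy as np

    def stencil_reference(u):
        """One Jacobi-style sweep of the 5-point averaging stencil."""
        out = u.copy()
        out[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:])
        return out

    def stencil_tiled(u, tile=64):
        """Same sweep, applied tile by tile; `tile` is the tunable block size."""
        out = u.copy()
        n, m = u.shape
        for i0 in range(1, n - 1, tile):
            for j0 in range(1, m - 1, tile):
                i1, j1 = min(i0 + tile, n - 1), min(j0 + tile, m - 1)
                out[i0:i1, j0:j1] = 0.25 * (u[i0-1:i1-1, j0:j1] + u[i0+1:i1+1, j0:j1] +
                                            u[i0:i1, j0-1:j1-1] + u[i0:i1, j0+1:j1+1])
        return out

    u = np.random.default_rng(3).random((512, 512))
    assert np.allclose(stencil_reference(u), stencil_tiled(u, tile=64))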

Lessons Learned from Operating Gordon, a Data-intensive Supercomputer: Utility, usability and utilization

Friday, March 6, 10-11 a.m., NERSC OSF 943, Conference Room 238

Glenn Lockwood, 10X Genomics, Inc.

Gordon, a 1024-node HPC system deployed by the San Diego Supercomputer Center, was architected with a number of unique features designed to address the needs of data-intensive scientific computing. These features (including 300 TB of flexibly provisioned flash storage, two InfiniBand torus fabrics, and virtualization-based large-memory nodes) were unprecedented when the system entered production and still remain very exotic among large-scale commodity-based clusters. However, Gordon has been heavily utilized by researchers across both compute- and data-intensive research communities over the last three years, and a wealth of workload and performance data has emerged.

These data contain compelling information about which architectural features are (and are not) productive for data-intensive applications, and this talk will discuss the performance and historic utilization of Gordon's flash subsystem, dual-fabric interconnect, and virtualization capabilities. These findings will then be used to contextualize key design decisions made for Comet, SDSC's upcoming 46,000-core system.

Thermodynamically Consistent Modeling and Simulation of Multiphase Flows

Friday, March 6, 11 a.m. - 12 p.m., Bldg. 50A, Room 5132

Ju Liu, Institute for Computational Engineering and Sciences, The University of Texas at Austin

Multiphase flow is a familiar phenomenon from daily life and occupies an important role in physics, engineering, and medicine. The understanding of multiphase flows relies largely on the theory of interfaces, which is not well understood in many cases. To date, the Navier-Stokes-Korteweg equations and the Cahn-Hilliard equation have represented two major branches of phase-field modeling. The Navier-Stokes-Korteweg equations describe a single component fluid material with multiple states of matter, e.g., water and water vapor; the Cahn-Hilliard type models describe multi-component materials with immiscible interfaces, e.g., air and water. In this work, a unified multiphase fluid modeling framework is developed based on rigorous mathematical and thermodynamic principles. This framework does not assume any ad hoc modeling procedures and is capable of formulating meaningful new models with an arbitrary number of different types of interfaces.
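
For reference, the Cahn-Hilliard equation mentioned above is commonly written for an order parameter c (e.g., a concentration) with mobility M, bulk free-energy density f, and gradient-energy coefficient kappa as the fourth-order system below; the specific free energy and mobility used in the unified framework may differ.

    \frac{\partial c}{\partial t} = \nabla \cdot \bigl( M \, \nabla \mu \bigr),
    \qquad
    \mu = f'(c) - \kappa \, \nabla^{2} c .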

In addition to the modeling, novel numerical technologies are developed in this work focusing on the Navier-Stokes-Korteweg equations. First, the notion of entropy variables is properly generalized to the functional setting, which results in an entropy-dissipative semi-discrete formulation. Second, a family of quadrature rules is developed and applied to generate fully discrete schemes. The resulting schemes have two main properties: they are provably dissipative in entropy and second-order accurate in time. In the presence of complex geometries and high-order differential terms, isogeometric analysis is invoked to provide accurate representations of computational geometries and robust numerical tools. A novel periodic transformation operator technology is also developed within the isogeometric context. It significantly simplifies the procedure of the strong imposition of periodic boundary conditions. These attributes make the proposed technologies ideal candidates for credible numerical simulation of multiphase flows.

A general-purpose parallel computing software package, named PERIGEE, has been developed in this work to provide an implementation framework for the above numerical methods. A comprehensive set of numerical examples has been studied to corroborate the aforementioned theories. Additionally, a variety of application examples have been investigated, culminating in a boiling simulation. Importantly, the boiling model overcomes several challenges faced by traditional boiling models, owing to its thermodynamically consistent nature. The numerical results indicate the promising potential of the proposed methodology for a wide range of multiphase flow problems.

PeerSync: Integrating the GPU with a Network Interface

Friday, March 6, 1 - 2 p.m., Bldg. 50F, Room 1647

Davide Rossetti, Software Engineer, NVIDIA

In the GPU off-loading programming model, the CPU is the initiator: it prepares and orchestrates work for the GPU. In GPU-accelerated multi-node programs, the CPU has to do the same for the network interface as well. But both the GPU and the network interface have sophisticated hardware resources, and these can be effectively short-circuited so as to take the CPU out of the loop altogether. PeerSync is a set of CUDA-InfiniBand Verbs interoperability APIs that opens up a wide range of possibilities. It also provides a scheme to go beyond the GPU-network pair, applying the same ideas to other third-party devices.

BIDS Data Science Lecture - Election Forensics: Distinguishing strategic behavior from frauds

Friday, March 6, 1 - 2:30 p.m., 190 Doe Library, UC Berkeley Campus

Walter Mebane, Professor of Political Science and Professor of Statistics, University of Michigan

Election forensics is the field devoted to using statistical methods to determine whether the results of an election are accurate: whether the results are the collective choice implied by citizens' intentions given the election rules. The fundamental challenge for election forensics is that strategic behavior is a ubiquitous aspect of political activity, and both strategic behavior and frauds cause patterns in election results that may appear anomalous in statistical estimates and tests. I use data from elections in several countries to illustrate how several models used in election forensics—including digit tests and tests focused on the normality of turnout and vote proportions—do not correctly discriminate between strategic behavior and frauds. Confounds with strategies are apparent even when geographically weighted estimation and measures of postelection complaints are considered. Only one method has so far given clear signs that it does not confuse strategic behavior and frauds. Latent dimensions of frauds seem to underlie that method's estimates and postelection complaints.
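
As a hypothetical illustration of the digit tests mentioned above, the Python sketch below compares the second significant digits of synthetic precinct-level vote counts with the frequencies implied by Benford's law and reports a Pearson chi-squared statistic. As the talk argues, a significant departure from the Benford expectation by itself does not distinguish fraud from strategic behavior.

    # Second-digit Benford-style test on synthetic vote counts (illustrative only).
    import numpy as np

    def second_digit(counts):
        """Second significant digit of each count >= 10."""
        counts = [c for c in counts if c >= 10]
        return np.array([int(str(c)[1]) for c in counts])

    # Benford expectation for the second digit d = 0..9:
    #   P(d) = sum_{k=1..9} log10(1 + 1/(10k + d))
    expected = np.array([sum(np.log10(1 + 1 / (10 * k + d)) for k in range(1, 10))
                         for d in range(10)])

    # Hypothetical precinct-level vote counts (synthetic, for illustration).
    rng = np.random.default_rng(4)
    votes = rng.lognormal(mean=5.5, sigma=0.8, size=2000).astype(int)

    digits = second_digit(votes)
    observed = np.bincount(digits, minlength=10) / len(digits)
    chi2 = len(digits) * np.sum((observed - expected) ** 2 / expected)
    print("observed second-digit freqs:", np.round(observed, 3))
    print("Pearson X^2 vs. Benford (df = 9):", round(chi2, 2))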