InTheLoop | 02.10.2014
NERSC Announces Second Annual HPC Achievement Awards
NERSC announced the winners of its second annual High Performance Computing (HPC) Achievement Awards on Feb. 4, 2014, during the annual NERSC User Group meeting. This year's winners, in four divisions, were:
- The Planck Collaboration: NERSC Award for High-Impact Science - Open Division;
- Victor Ovchinnikov, Harvard University: NERSC Award for High-Impact Science - Early Career;
- Jean-Luc Vay, Berkeley Lab: NERSC Award for Innovative Use of HPC - Open Division; and
- Anubhav Jain, Berkeley Lab: NERSC Award for Innovative Use of HPC - Early Career.
Sethian & Saye's Bubble Visualization Honored
A visualization created by Berkeley Lab mathematicians Robert Saye and James Sethian of soap bubbles bursting and reforming has won honorable mention in the 2013 International Science and Engineering Visualization Challenge, sponsored by Science magazine and the National Science Foundation.
In their strikingly realistic visualization, the bubbles in a quivering cluster burst one by one, the surroundings reflected on their surfaces as they rearrange and re-form. Saye is a member of the Computational Research Division’s Mathematics Group, which is led by Sethian. »Read more
In Memoriam: Michael Welcome
A celebration of life is pending for Michael Welcome, a member of NERSC’s Mass Storage Group, who collapsed at work on Thursday, Jan. 30, and subsequently died. Welcome spent his entire career working for computing organizations at Lawrence Berkeley and Lawrence Livermore national laboratories. He was 56.
During his 30-year career, Welcome made significant contributions in applied mathematics, in system administration, and in improving the efficiency of high performance computing systems. »Read more
ACS Administrative Assistant Gail Jackson-Maeda Loses Battle with Cancer
A memorial service is pending for Gail Jackson-Maeda, who died Feb. 5 after a long battle with cancer. Jackson-Maeda had been the administrative assistant for the Advanced Computing for Science Department (ACS) in the Computational Research Division. Jackson-Maeda joined Berkeley Lab in 2008. Prior to that, she worked in the Information and Communication Services and Environmental Restoration divisions at Lawrence Livermore National Laboratory.
“Gail always brought a smiling face and an impressive work ethic to her job,” said Deb Agarwal, head of ACS. “She really loved working here at LBNL and made many friends. Gail was a vital member of the ACS department and she will be sorely missed.” Employees who would like to talk to someone outside the lab about Jackson-Maeda can use the confidential counseling that Berkeley Lab provides, at no charge, through CARE Services on the UC Berkeley campus; CARE Services can be reached at (510) 643-7754.
Big Data Challenges at the Advanced Light Source
In this HPCWire podcast, Berkeley Lab's Dula Parkinson talks about using hard X-rays to characterize various materials, and discusses how key technologies in data management and HPC are powering his work forward. »Read more.
How NERSC Powers the "Google of Materials"
In an HPCWire podcast, Kristin Persson, co-founder of the Materials Project, reveals how her team is making use of a unique processing approach on supercomputers at NERSC to continuously calculate chemical properties of various materials and make those available to researchers seeking to develop better batteries, solar cells, and other products. »Read more.
Training for New Video Conference Equipment, Feb. 25
The video conferencing system in 50B-2222 was recently upgraded. A training session will be held in that room on Feb. 25 at 2 p.m. For more information, »contact James Lee.
This Week's CS Seminars
A New Framework for Analyzing Parallel Programs at Runtime
Monday, February 10, 10:00am, Bldg. 50B, Room 4205
Michelle Goodstein, Carnegie Mellon University
Despite the best efforts of programmers and programming systems researchers, software bugs continue to be problematic. This talk will focus on a new framework for performing dynamic (runtime) parallel program analysis. Existing dynamic analysis tools have focused on monitoring sequential programs. Parallel programs are susceptible to a wider range of possible errors than sequential programs, making them even more in need of online monitoring. Unfortunately, monitoring parallel applications is difficult due to inter-thread data dependences and relaxed memory consistency models.
I will present dataflow-analysis-based dynamic parallel monitoring, a novel family of software frameworks that avoid these pitfalls without relying on strong consistency models or detailed inter-thread dependence tracking. Using insights from dataflow analysis, my frameworks enable parallel applications to be monitored concurrently without capturing a total order of application instructions across parallel threads. I have shown that my frameworks are provably guaranteed never to miss an error and sacrifice precision only due to the lack of a relative ordering among recent events, and I will present experimental results on performance and precision from my implementations.
Numerical Upscaling and Algebraic Multigrid for Mixed Finite Element Discretizations
Tuesday, February 11, 11:30am - 12:30pm, Bldg. 50B, Room 2222
Umberto Villa, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
The mixed finite element method is a natural, conservation preserving, way to discretize a large class of partial differential equations (PDEs) that model physical problems of practical relevance in various fields of engineering, including fluid-dynamics, solid mechanics, and electromagnetism. Many applications of these models feature a multi-physics and multi-scale nature that poses a substantial challenge to state-of-the-art solvers. Upscaling techniques can reduce computational cost by solving coarse scale models that take into account interactions at different scales.
In this talk, we will introduce a novel numerical upscaling technique that can be applied in the settings of mixed finite element discretizations and unstructured meshes. Our approach is based on a specialized element-based agglomeration technique that allows us to construct hierarchies of coarse spaces that possess stability and approximation properties for wide classes of PDEs. More specifically, the spaces resulting from the agglomerated elements are subspaces of the original de Rham sequence of H1-conforming, H(curl)-conforming, H(div)-conforming, and L2 spaces associated with an unstructured “fine” mesh. The procedure can be recursively applied so that a hierarchy of nested de Rham sequences can be constructed. This hierarchy exhibits approximation properties comparable to those of the original fine-grid spaces and can be employed as a discretization tool in multilevel Monte Carlo processes or as an algebraic multigrid (AMG) preconditioner for iterative solvers. Numerical results will illustrate the validity of our approach both as a discretization tool (upscaling) and as a solver (AMG). An application to subsurface flow simulation will also be presented. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
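At the algebraic level, the coarse operators behind both upscaling and AMG come from the Galerkin triple product A_c = PᵀAP, where P interpolates coarse degrees of freedom to the fine mesh. The following toy sketch is our own illustration (not Villa's code), using a 1-D Laplacian and a piecewise-constant, aggregation-based P:

```python
# Toy sketch of Galerkin coarsening (illustration only): A_c = P^T A P.

def matmul(X, Y):
    """Dense product of two matrices stored as lists of lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

def galerkin_coarse(A, P):
    """A_c = P^T A P, the algebraic core of both upscaling and AMG."""
    return matmul(transpose(P), matmul(A, P))

# 1-D Laplacian on four fine unknowns, aggregated into two coarse ones:
A = [[ 2, -1,  0,  0],
     [-1,  2, -1,  0],
     [ 0, -1,  2, -1],
     [ 0,  0, -1,  2]]
P = [[1, 0],
     [1, 0],
     [0, 1],
     [0, 1]]   # each column is one agglomerate of fine unknowns

Ac = galerkin_coarse(A, P)   # a 2x2 operator with the same Laplacian structure
```

The point of the agglomeration machinery in the talk is to build a P for which A_c retains the stability and approximation properties of the fine-grid problem, which this piecewise-constant toy does only in the crudest sense.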
Wednesday, February 12, 1:00pm - 2:00pm, Bldg. 50B, Room 1237 (ESnet NOC)
Luigi Rizzo, Università di Pisa, Italy
netmap is a network I/O framework for FreeBSD and Linux that provides a 10-fold speedup over ordinary OS mechanisms. netmap uses less than one core to saturate a 10 Gbit/s interface with minimum size frames (14.88 Mpps) or switch over 20 Mpps on virtual ports of a VALE switch (part of the netmap module).
In the past two years we have extended the framework in many ways, and it can now replace native in-kernel software switches, accelerate networking in virtual machines, and be used by unmodified applications based on libpcap.
In this talk we will give an overview of the current features of netmap and the VALE software switch, and discuss upcoming work in using its key performance enhancement techniques to accelerate processing in the network protocol stack.
Surrogate Model Algorithms for Computationally Expensive, Black-Box, Global Optimization Problems
Thursday, February 13, 10:00am - 11:00am, Bldg. 50B, Room 4205
Juliane Mueller, Cornell University
This talk focuses on algorithms developed for solving computationally expensive black-box global optimization problems. These problems are encountered in application areas such as structural optimization, carbon sequestration, watershed management, and climate model research, where time-consuming simulations have to be run in order to obtain objective (and constraint) function values.
The algorithms discussed here exploit information from a repeatedly updated surrogate model (also known as a response surface model or metamodel), which is a computationally inexpensive approximation of the true objective function and helps to decide at which points in the variable domain the next computationally expensive function evaluation should be done. Hence, compared to optimization algorithms that rely on numerical differentiation or evolutionary strategies, fewer expensive simulations are required to find (near) optimal solutions and the optimization time is significantly lower.
We discuss algorithms for optimization problems with continuous and mixed-integer variables. Within the scope of continuous optimization, various choices of surrogate models have been examined; radial basis functions, as well as ensembles of radial basis functions and cubic polynomial regression models, proved to be the best. A surrogate model algorithm for mixed-integer problems has been developed and compared in numerical experiments to a genetic algorithm and NOMAD. The surrogate model algorithm was shown to perform significantly better than the alternative approaches for 12 out of 15 problems, including application problems arising in structural optimization and optimal reliability design. Hence, algorithms using surrogate models are a promising option when optimizing computationally expensive black-box problems with integer constraints.
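For readers curious what such a surrogate loop looks like, here is a minimal one-dimensional sketch. It is our own illustration, not Mueller's algorithm: the cubic radial-basis-function surrogate, the function names (`rbf_fit`, `surrogate_minimize`), and the toy objective are all assumptions, and the sampling rule is purely exploitation-driven for brevity.

```python
# Minimal 1-D sketch of surrogate-based optimization (illustration only):
# fit a cubic radial-basis-function (RBF) interpolant to the points evaluated
# so far, then spend the next expensive evaluation where the surrogate
# predicts the lowest value.

def solve(M, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    aug = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[p] = aug[p], aug[c]
        for r in range(c + 1, n):
            f = aug[r][c] / aug[c][c]
            aug[r] = [a - f * s for a, s in zip(aug[r], aug[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][j] * x[j] for j in range(r + 1, n))) / aug[r][r]
    return x

def rbf_fit(xs, ys):
    """Weights of the cubic RBF interpolant phi(r) = r**3 through (xs, ys)."""
    Phi = [[abs(xi - xj) ** 3 for xj in xs] for xi in xs]
    return solve(Phi, ys)

def rbf_eval(xs, w, x):
    return sum(wi * abs(x - xi) ** 3 for wi, xi in zip(w, xs))

def surrogate_minimize(f, lo, hi, n_expensive=8):
    """Spend at most n_expensive evaluations of f, guided by the surrogate."""
    xs = [lo, (lo + hi) / 2, hi]              # small initial design
    ys = [f(x) for x in xs]                   # the only expensive calls
    while len(xs) < n_expensive:
        w = rbf_fit(xs, ys)
        cand = [lo + (hi - lo) * k / 200 for k in range(201)]
        x_new = min(cand, key=lambda x: rbf_eval(xs, w, x))
        if x_new in xs:                       # surrogate minimum already sampled
            break
        xs.append(x_new)
        ys.append(f(x_new))                   # one more expensive evaluation
    return min(zip(ys, xs))                   # best (value, point) found

# Hypothetical "expensive" objective with its minimum at x = 1.3:
best_y, best_x = surrogate_minimize(lambda x: (x - 1.3) ** 2, 0.0, 4.0)
```

A production algorithm balances this pure exploitation with exploration (for example, a distance-to-sampled-points criterion), which is part of what the algorithms in the talk address.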
Deflation-based Domain Decomposition Preconditioners
Friday, February 14, 2014, 10:00am - 11:00am, Bldg. 50F, Room 1647
Pierre Jolivet, Laboratoire Jacques-Louis Lions, Université Pierre et Marie Curie
Domain decomposition methods are widely used in applied mathematics for solving partial differential equations and are regarded as highly scalable algorithms, alongside multigrid methods. Making those methods scalable to thousands of processors is however not a straightforward task. Projection operators are one of the essential tools for achieving scalability: they are used for building deflation preconditioners. I will present a C++ framework, whose efficiency is assessed by a comparison with algebraic multigrid solvers, accompanied by theoretical results to show how it can solve ill-conditioned problems with billions of unknowns. I will also talk about some on-going work on coarse space recycling for improved efficiency, as well as how the projection operators can be used to pipeline communications and decrease the number of synchronizations inside a Krylov solver.
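The projection operators behind deflation can be shown in a few lines. The following sketch is our own illustration (not the speaker's C++ framework), assuming a coarse space spanned by a single vector z: applying P = I − Az(zᵀAz)⁻¹zᵀ to a residual annihilates its coarse component, so zᵀ(Pr) = 0.

```python
# Tiny sketch of a deflation projector (illustration only) with a one-vector
# coarse space z: P r = r - A z (z^T A z)^{-1} z^T r.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def deflate(A, z, r):
    """Apply the deflation projector for the single coarse vector z."""
    Az = matvec(A, z)
    coarse = dot(z, r) / dot(z, Az)      # (z^T A z)^{-1} z^T r
    return [ri - coarse * azi for ri, azi in zip(r, Az)]

A = [[ 2, -1,  0],
     [-1,  2, -1],
     [ 0, -1,  2]]
z = [1.0, 1.0, 1.0]                      # constant vector: the usual coarse mode
r = [1.0, 2.0, 3.0]
rd = deflate(A, z, r)                    # deflated residual, with z^T rd == 0
```

In practice Z has many columns (one or more per subdomain), zᵀAz becomes a coarse matrix solved on a subset of processes, and the projector is combined with a one-level domain decomposition preconditioner inside a Krylov solver.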
Adaptive Finite Element Approximations for Kohn-Sham Equations
Friday, February 14, 2014, 2:00pm - 3:00pm, Bldg. 50F, Room 1647
Aihui Zhou, Chinese Academy of Sciences
In this presentation, we will talk about adaptive finite element approximations for Kohn-Sham equations. We will introduce an adaptive finite element algorithm with a quite general marking strategy and prove the convergence of the adaptive finite element approximations. Using Dörfler's marking strategy, we then obtain the convergence rate and quasi-optimal complexity. We will also report several typical numerical experiments that not only support our theory, but also show the robustness and efficiency of adaptive finite element computations in electronic structure calculations. This presentation is based on joint works with Huajie Chen, Xiaoying Dai, Xingao Gong, and Lianhua He.
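Dörfler (bulk-chasing) marking, mentioned above, is simple to state in code: mark a smallest set of elements whose squared error indicators account for a fixed fraction θ of the total. A minimal sketch, our own illustration with hypothetical indicator values:

```python
# Minimal sketch of Dorfler marking (illustration only): greedily mark the
# largest error indicators until their squared sum reaches theta * total.

def dorfler_mark(indicators, theta=0.5):
    """indicators: per-element error estimates eta_T; returns marked element ids."""
    order = sorted(range(len(indicators)), key=lambda i: -indicators[i] ** 2)
    total = sum(e ** 2 for e in indicators)
    marked, acc = [], 0.0
    for i in order:
        marked.append(i)
        acc += indicators[i] ** 2
        if acc >= theta * total:
            break
    return marked

eta = [0.1, 0.9, 0.3, 0.05, 0.4]        # hypothetical per-element indicators
marked = dorfler_mark(eta, theta=0.5)   # only the dominant element is marked
```

The marked elements are then refined, the problem is re-solved, and the estimate/mark/refine loop repeats; the convergence-rate results in the talk hinge on this bulk criterion.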