InTheLoop | 07.25.2011
July 25, 2011
NERSC Announces Hopper Scaling Incentive Program
For projects that haven't yet scaled their codes to 683 or more nodes (which is the level at which a job is considered "big" on Hopper), NERSC is offering scaling incentives, mostly focused on the use of OpenMP. Any runs using 683 or more nodes will be reimbursed up to 1 million MPP hours per user, with a limit of 2 million hours per repository.
NERSC announced the incentive program in an email to users on Wednesday, July 20, and offered to provide help in developing a scaling plan. For some codes, adding OpenMP directives will allow users to scale up and run bigger science problems. Read more.
26 CS Researchers Present Results at ICIAM 2011
Last week, July 18–22, 2011, mathematical scientists from around the world gathered in Vancouver, BC, Canada for the 7th International Congress on Industrial and Applied Mathematics (ICIAM 2011). Contributions from Berkeley Lab Computing Sciences included:
- John Bell co-organized a session on “Fluctuating Hydrodynamics: Fluid Mechanics at Small Scales,” gave an invited talk on “Low Mach Number Models in Computational Astrophysics,” and co-authored “Finite Volume Methods for Fluctuating Hydrodynamics” and “Adaptive Algorithms for Multi-Phase Flow in Porous Media.”
- Aydin Buluc co-authored and presented “High-Performance Combinatorial Algorithms for the Analysis of the Electric Power Grid.”
- Andrew Canning presented “An MPI/OpenMP Implementation of 3D FFTs for Plane Wave First Principles Materials Science Codes.”
- Sven Chilton co-authored and presented a poster on “Damping of Spurious Reflections off of Coarse-Fine Adaptive Mesh Refinement Grid Boundaries.”
- Phillip Colella co-authored posters on “Damping of Spurious Reflections off of Coarse-Fine Adaptive Mesh Refinement Grid Boundaries” and “A Freestream-Preserving High-Order Finite-Volume Method for Hyperbolic Conservation Laws in Mapped Coordinates with Adaptive Mesh Refinement.”
- Jeffrey Donatelli presented “Numerical Methods for Incompressible Two-Phase Flow in Complex Geometries with Applications to Coating Flows.”
- Tony Drummond co-authored and presented “Addressing the Multi-Core Performance Challenge for High Performance Numerical Software Tools in the DOE ACTS Collection.”
- Kirsten Fagnan co-authored “Adaptive Algorithms for Multi-phase Flow in Porous Media.”
- Ming Gu co-authored “Reduced Rank Regression via Convex Optimization.”
- Maciej Haranczyk presented “Tools for Analysis of Void Space of Porous Materials.”
- Xiaoye Sherry Li co-organized a session on “Application Studies of Programming Models for Multicore and GPU Clusters” and co-authored “Evaluation of MPI/OpenMP Efficiency of an Accelerator Beam Dynamics Code using Particle-in-cell Methods.”
- Michael Lijewski co-authored “Adaptive Algorithms for Multi-phase Flow in Porous Media.”
- Osni Marques co-organized the session on “Application Studies of Programming Models for Multicore and GPU Clusters,” presented “Overlapping Communications and Calculations with a PGAS Language,” and co-authored “Addressing the Multi-Core Performance Challenge for High Performance Numerical Software Tools in the DOE ACTS Collection.”
- Daniel Martin co-authored and presented “High-Performance Adaptive Algorithms for Ice-Sheet Modeling.”
- Peter McCorquodale co-authored and presented a poster on “A Freestream-Preserving High-Order Finite-Volume Method for Hyperbolic Conservation Laws in Mapped Coordinates with Adaptive Mesh Refinement.”
- Juan Meza co-organized a session on “Bridging Applied Mathematics and Computational Geosciences for Environmental Management”; co-authored and presented “An Overview of the Advanced Simulation and Capability for Environmental Management (ASCEM) Program: Goals and Status”; and co-authored “High-Performance Combinatorial Algorithms for the Analysis of the Electric Power Grid.”
- Gregory Miller co-authored and presented “Multiscale Simulation of Polymer Flow using Adaptive Sampling.”
- Esmond Ng organized a session on “Large Scale Ice Sheet Modeling and Simulation,” presented “ISICLES: Ice Sheet Initiative for Climate Extremes,” and co-authored “High-Performance Adaptive Algorithms for Ice-Sheet Modeling.”
- George Pau co-authored and presented “Adaptive Algorithms for Multi-Phase Flow in Porous Media.”
- Per-Olof Persson presented “High Fidelity Simulations of Flapping Wings Designed for Energetically Optimal Flight.”
- Chris Rycroft presented “A Mechanical Model of Mammalian Acinus Growth.”
- Robert Saye presented “A New Multiphase Level Set Method and Applications.”
- James Sethian organized a session on “Advances in Advancing Interfaces: Robust Methods for Tracking Complex Interface Dynamics in Fluids and Materials”; presented “Recent Advances in PDE-based Interface Algorithms” and “Newtonian and Viscoelastic for Industrial Ink Jet Printing”; and co-authored “Direct Numerical Simulations of Solid-Fluid Coupling in Geophysical Flow Systems” and “Past Breakup Simulations of a Two Inviscid Fluid System Using Level Sets.”
- David Trebotich co-organized a session on “Computational Modeling of Multiscale Systems with Dynamic Constitutive Laws” and co-authored “Multiscale Simulation of Polymer Flow using Adaptive Sampling.”
- Chao Yang presented “Solving Nonlinear Eigenvalue Problems in Electronic Structure Calculations.”
- Xuefei (Rebecca) Yuan co-authored and presented a poster on “Implicit, Full Coupling of the Adaptive Grid and Physical Problems in Plasma Simulation with Additive Schwarz Preconditioned Inexact Newton.”
NERSC and CRD Staff Contribute to CScADS Summer Workshops
The SciDAC-funded Center for Scalable Application Development Software (CScADS) is currently holding its second week of summer workshops in Tahoe City, CA. This is the fifth installment of an annual workshop series that aims to engage the community in the challenges of leadership computing and to foster interdisciplinary collaborations.
NERSC and CRD staff are giving several presentations. In last week’s (July 18–21) workshop on Leadership-class Machines, Extreme Scale Applications and Performance Strategies, David Turner gave an “Overview of NERSC Facilities and Usage,” and Yili Zheng presented “Unified Parallel C (UPC).” The August 1–4 Performance Tools for Extreme Scale Computing workshop will include David Skinner’s talk on “Describing Performance and Resource Needs of DOE Office of Science Workloads.” And Kathy Yelick is one of four organizers of the August 8–10 workshop on Libraries and Autotuning for Extreme Scale Applications; the agenda for that workshop has not been posted yet.
More LBNL CSGF Fellows Present Research at Conference
The 2011 DOE Computational Science Graduate Fellowship (CSGF) Annual Conference was held last week (July 21–23) in Arlington, Virginia. Last week’s InTheLoop item on the conference failed to mention poster presentations by several CSGF Fellows who are currently working or have worked at Berkeley Lab. Here is the complete list:
- Edward Baskerville (University of Michigan): Modeling and prediction of feeding links using trait data
- Carl Boettiger (UC Davis): Early warning signals of population collapse
- Scott Clark (Cornell): ALE: An assembly likelihood evaluation framework to estimate the quality of metagenome assemblies
- Leslie Dewan (MIT): Quantifying radiation damage to the network structure of radioactive waste-bearing materials
- Christopher Eldred (University of Utah): WRF model configuration for seasonal climate forecasting over the Bolivian altiplano
- Thomas Fai (New York University): An FFT-based immersed boundary method for variable viscosity and density fluids
- Charles Frogner (MIT): Integrative image analysis of Drosophila in situ hybridization data
- Virgil Griffith (Caltech): Superseding Venn information diagrams with partial entropy decomposition
- Irene Kaplow (Stanford): Integrating gene expression, chromatin modification, and transcription factor binding human embryonic stem cell datasets to understand gene regulation in development
- Eric Liu (MIT): The importance of mesh adaptation for higher-order solutions of PDEs
- Douglas Mason (Harvard): Husimi projection in graphene eigenstates
- Scot Miller (Harvard): Sources of nitrous oxide, an important greenhouse gas, over the central United States
- Britton Olson (Stanford): Shock-turbulence interactions in a rocket nozzle (Best Poster Award)
- Troy Ruths (Rice University): A sequence-based, population genetic model of regulatory pathway evolution
- Sean Vitousek (Stanford): Towards Navier-Stokes simulations of the ocean
Also, a few LBNL-associated fellows gave talks. Anubhav Jain of MIT, this year’s incoming Alvarez Fellow, presented “The Materials Genome: An online database for the design of new materials for clean energy and beyond.” Others include:
- Eric Chi (Rice University): Robustly finding the needles in a haystack of high-dimensional data
- Ying Hu (Rice University): Optical properties of gold-silica-gold multilayer nanoshells
- Sarah Richardson (Johns Hopkins): Algorithms for the design and assembly of a modular, synthetic genome for yeast
SC11 Registration Opens, Technical Program Available on Website
Registration for SC11, the conference for high performance computing, networking, storage and analysis, is now open. Attendees are encouraged to register by mid-October to save as much as $250 on their technical program registration fees. To register, go to the SC11 registration site.
SC11, the 24th annual conference in the SC series, takes place Nov. 12–18 at the Washington State Convention Center in Seattle. The SC11 Technical Program, featuring peer-reviewed papers covering a broad spectrum of technical research as well as panel discussions featuring research and industry leaders, tutorials, workshops and more, is available via an online interactive calendar.
This year saw a significant increase in high-quality submissions, including 100 more technical paper submissions than last year. The increase once again puts the paper acceptance rate at about 20 percent, ensuring that only the strongest and most important submissions became part of the technical program. Of the 72 papers accepted, 11 feature authors from Berkeley Lab.
In addition, attendees will have the chance to experience new technical tracks, such as State of the Practice (chaired by NERSC’s David Paul), and will benefit from an expanded program on Friday, Nov. 18. For more information, read the SC11 news release.
XSEDE Will Replace and Expand the TeraGrid
A partnership of 17 institutions (including UC Berkeley) today announced the Extreme Science and Engineering Discovery Environment (XSEDE), which they say will be the most advanced, powerful, and robust collection of integrated advanced digital resources and services in the world. The National Science Foundation will fund the XSEDE project for five years, at $121 million. XSEDE will replace and expand the TeraGrid project that started more than a decade ago. Read more.
This Week’s Computing Sciences Seminars
Uncertainty Quantification Discussion Group: UQ Refinement Strategies
Monday, July 25, 10:00–11:00 am, 50B-4205
Sherry Li, LBNL/CRD
The group will discuss a paper by Charles Tong in advance of his Friday seminar (see below): “Self-validated variance-based methods for sensitivity analysis of model outputs.”
Immersive Visualization, the New Challenge
Tuesday, July 26, 9:00–10:00 am, 50F-1647
Jürgen Rurainsky, Fraunhofer Heinrich Hertz Institute, Berlin, Germany
What will be the next challenge after 3D? 3D visualization is advancing rapidly, for almost every application and almost every device. Even as the task of producing, delivering, and displaying perfect 3D content remains exciting, research and development centers like the Fraunhofer HHI are looking for the next challenge. Immersive visualization, in which the person becomes part of the content, is the natural next step after 3D, but it requires sophisticated knowledge in several areas. The Fraunhofer HHI, with expert knowledge in autostereoscopic displays, gesture navigation, encoding, immersive media, and computer vision and graphics, is combining these disciplines to create such experiences.
Toward Efficient Parallel in Time Methods for Partial Differential Equations
Wednesday, July 27, 10:00–11:00 am, 50B-4205
Mike Minion, University of North Carolina
I will discuss a strategy for the parallelization of numerical methods for partial differential equations in the temporal direction. In practice, temporal parallelization is only attractive if the temporal parallelization has greater parallel efficiency than (additional) spatial parallelization. Hence, the focus here is on constructing methods with good parallel efficiency. The method presented iteratively improves the solution at each time slice by applying spectral deferred correction sweeps to a hierarchy of discretizations at different spatial and temporal resolutions. Coarse resolution problems are formulated using a time-space analog of the full approximation scheme. Connections to the parareal algorithm and space-time multigrid methods will be discussed, and the parallel efficiency and speedup for PDEs in one, two and three dimensions will be presented.
A Geometric Multigrid Preconditioner for MicroFE Analysis for Bone Structures based on a Pointer-Less Octree
Wednesday, July 27, 2:00–3:00 pm, 50B-4205
Peter Arbenz, ETH Zurich
The state-of-the-art method for predicting bone stiffness is micro finite element (μFE) analysis based on high-resolution computed tomography (CT). Modern parallel solvers enable simulations with billions of degrees of freedom. In this talk I will present a solver that works directly on the CT image and exploits the geometric properties given by the 3D pixels (voxels). The data is stored in a pointer-less octree. The tree data structure provides different resolutions of the image, which are used in the design of a geometric multigrid preconditioner, and it makes matrix-free implementations possible on all levels. This new solver reduces the memory footprint by more than a factor of 10 compared with a solver that uses an algebraic preconditioner; it can solve much bigger problems and scales excellently on a Cray XT5 supercomputer.
Overview of Uncertainty Quantification Methods and Methodologies for Multi-Physics Applications
Friday, July 29, 2:00–3:00 pm, 50F-1647
Charles Tong, Lawrence Livermore National Laboratory
In this talk I will discuss some of our experiences with developing and using uncertainty quantification (UQ) methods and methodologies for multi-physics simulation models. Details of the presentation will include UQ approaches, general UQ methodologies, a survey of methods for dealing with high dimensional uncertain parameter space, constructing surrogate models, uncertainty and sensitivity propagation, and data fusion.
HMPP: A Directive Based Approach to Address GPUs
Friday, July 22, 3:00–4:00 pm, OSF 943-254
Denis Gerrer, CAPS Enterprise
This talk will give an introduction on how to use HMPP to address GPUs (NVIDIA and/or ATI) with your legacy application. HMPP is a source-to-source compiler that generates CUDA and/or OpenCL code from existing C or Fortran code and offers a high-level abstraction for hybrid programming.
The HMPP compiler integrates powerful data-parallel backends for NVIDIA CUDA and OpenCL that drastically reduce development time. The HMPP runtime ensures application deployment on multi-GPU systems. Software assets are kept independent from both hardware platforms and commercial software. While preserving portability and hardware interoperability, HMPP increases application performance and development productivity.
This talk will present the main features of HMPP, enabling you to write your own hybrid applications and to use HMPP incrementally to move your application to GPUs. It will give an overview of the two sets of directives used to port an application to the GPU: the first set generates a kernel and moves data to the GPU, and the second set optimizes the GPU kernel.
Link of the Week: The Science of Concentration
“Multitasking is a myth,” says Winifred Gallagher, author of Rapt: Attention and the Focused Life. “You cannot do two things at once. The mechanism of attention is selection: it’s either this or it’s that…. People don’t understand that attention is a finite resource, like money.”
Gallagher believes we face a fundamental challenge:
How to balance your need to know—for the first time in history, fed by a bottomless spring of electronic information, from e-mail to Wikipedia—with your need to be? To think your thoughts, enjoy your companions, and do your work (to say nothing of staring into a fire or gazing dreamily at the sky) without interruption from beeps, vibrations, and flashing lights? Or perhaps worse, from the nagging sense that when you're off the grid, you're somehow missing out?
Science’s new understanding of attention can help shape your answers to this question. The nineteenth-century psychologist William James had a key insight: “My experience is what I agree to attend to.” But Gallagher goes even farther and says that what we pay attention to is our reality. “Taking top-down control of your own experience almost always correlates with well-being.”
In a New York Times interview with John Tierney, she recommends starting your work day concentrating on your most important task for 90 minutes. At that point your prefrontal cortex probably needs a rest, and you can answer e-mail, return phone calls and sip caffeine (which does help attention) before focusing again. But until that first break, don’t get distracted by anything else, because it can take the brain 20 minutes to do the equivalent of rebooting after an interruption.
About Computing Sciences at Berkeley Lab
The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.
ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.
Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.