InTheLoop | 02.28.2011


A Goldilocks Catalyst: Converting CO2 to Methanol

Carbon dioxide (CO2) emissions from fossil fuel combustion are major contributors to global warming. Since CO2 comes from fuel, why can't we recycle it back into fuel rather than releasing it into the atmosphere? An economical way to synthesize liquid fuel from CO2 would help mitigate global warming while reducing our need to burn fossil fuels. Catalysts that induce chemical reactions could be the key to recycling carbon dioxide, but they need to have just the right activity level—too much, and the wrong chemicals are produced; too little, and they don't do their job at all. Researchers using NERSC computers may have found a catalyst that’s just right. Read more.


ESnet Provides Help Transitioning to IPv6

Last September, Federal CIO Vivek Kundra issued a memo mandating that government agencies take aggressive steps to adopt IPv6, the next-generation Internet Protocol, which allows for a vastly increased address space. Having adopted IPv6 many years ago, ESnet is well positioned to provide help to others making the transition. But it’s important to get moving now. Read more.
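To make the address-space difference concrete, here is a minimal Python sketch (illustrative only, not taken from the memo or from ESnet's transition materials; the port number is an arbitrary example) that compares the two address spaces and opens an IPv6 socket that can also serve IPv4 clients on a dual-stack host:

    import ipaddress
    import socket

    # IPv4 has a 32-bit address space; IPv6 has a 128-bit one.
    print(f"IPv4 addresses: {2**32:,}")    # 4,294,967,296
    print(f"IPv6 addresses: {2**128:,}")   # roughly 3.4 x 10^38

    # Parse an address from the IPv6 documentation prefix (2001:db8::/32).
    addr = ipaddress.ip_address("2001:db8::1")
    print(addr.version, addr.exploded)

    # Open an IPv6 TCP socket; with IPV6_V6ONLY disabled (where the OS
    # allows it), a dual-stack host accepts IPv4 connections on the
    # same socket, which eases a gradual transition.
    sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    sock.bind(("::", 8080))
    sock.listen(1)
    print("Listening on [::]:8080")
    sock.close()

Dual-stack operation like this lets a service answer both protocols during the transition period rather than requiring a flag-day cutover.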


CS Staff Contribute to SIAM Conference on CS&E

The SIAM Conference on Computational Science and Engineering is being held this week, February 28–March 4, in Reno, Nevada. As usual, many Berkeley Lab staff are contributing; their presentations are listed below (non-LBNL co-authors are omitted).

  • Andrew Canning: “An MPI/OpenMP Implementation of 3D FFTs for Plane Wave First Principles Materials Science Codes”
  • Sven Chilton and Phil Colella: “Damping of Spurious Reflections off of Coarse-fine Adaptive Mesh Refinement Grid Boundaries”
  • Alexander Chorin and Matthias Morzfeld: “Implicit Particle Filter”
  • Phil Colella and Brian Van Straalen: “Progress on the Development of an Elliptic Operator Matrix Assembly Extension to the Chombo AMR Library”
  • Phil Colella and Dan Martin: “Higher Order Finite Volume Methods on Mapped Grids, with Application to Gyrokinetic Plasma Modeling”
  • Jim Demmel: “Video Background Subtraction Using Communication-Avoiding QR on a GPU”
  • Tony Drummond: “Using the Distributed Coupling Toolkit (DCT) to Couple Model Components of Generalized Curvilinear Environmental Model”
  • Ming Gu: “Fast Higher Order FD like Schemes with Sobolev Type Norm Minimization,” “Optimization,” and “Efficient Structured Solution of Large Sparse Linear Systems”
  • Hans Johansen: “Fourth-order Finite Volume Cut Cell Approach for Elliptic and Parabolic PDEs”
  • Sherry Li: “Parallel Algorithms for Hierarchically Semiseparable Structures”
  • Sherry Li, Ji Qiang, and Robert Ryne: “Evaluation of MPI/OpenMP Efficiency in Particle-in-Cell Beam Dynamics Simulation”
  • Kamesh Madduri: “Diagnosis, Tuning, and Redesign for Multicore” and “Optimizing Short-read Genome Assembly Algorithms for Emerging Multicore Platforms”
  • Osni Marques: “Overlapping Communication and Calculation with a PGAS Language”
  • Peter McCorquodale and Phillip Colella: “A High-order Finite-volume Method for Hyperbolic Conservation Laws on Locally-refined Grids”
  • Greg Miller: “Viscoelastic Flows with Microscopic Evaluation of Kramers Rod Forces”
  • Andy Nonaka, Ann Almgren, John Bell, and Mike Lijewski: “MAESTRO and CASTRO — Petascale AMR Codes for Astrophysical Applications”
  • Per-Olof Persson: “High Fidelity Simulations of Flapping Wings Designed for Energetically Optimal Flight”
  • Dovi Poznanski: “Real-time Classification of Astronomical Events with Python”
  • Chris Rycroft: “Computation of Three-dimensional Standing Water Waves”
  • John Shalf: “Comparing Performance and Energy Efficiency of Lightweight Manycore to GPU for LBM Methods”
  • David Trebotich: “Resolving Microfeatures in Oldroyd-B Flow behind a Sphere”
  • Lin-Wang Wang: “Large Scale Ab Initio Calculations for Photovoltaic Materials”
  • Martin White: “Emulating the Nonlinear Matter Power Spectrum for the Universe”
  • Ichitaro Yamazaki: “An Interpolatory Parallel Method for Large-scale Nonlinear Eigenvalue Problems”
  • Ichitaro Yamazaki, Sherry Li, and Esmond Ng: “Sparse Matrix Techniques in a Parallel Hybrid Solver for Large-scale Linear Systems”
  • Chao Yang, Stefano Marchesini, Filipe Maia, and Andre Schirotzek: “Sparse Matrix Techniques in X-ray Diffractive Imaging”
  • Yili Zheng: “Using UPC for Hybrid Programming”

Several Lab staff are also session organizers:

  • Hans Johansen and Phil Colella: “Higher-order Finite Volume Methods”
  • Sherry Li, Andrew Canning, and Osni Marques: “Hybrid Programming Models — Development and Deployment”
  • David Trebotich: “Computational Modeling of Multiscale Systems with Dynamic Constitutive Laws”
  • Ichitaro Yamazaki and Chao Yang: “Sparse Matrix Techniques in Large-Scale Scientific Computation”

Tapia Diversity in Computing Conference Coming to S.F. in April

The Richard Tapia Celebration of Diversity in Computing Conference will be held at the Fairmont Hotel in San Francisco this year, April 3–5, 2011, chaired by David Patterson. The conference is an opportunity to recruit from a diverse pool of excellent candidates and includes a doctoral consortium and poster sessions.

The Berkeley Lab Computing Sciences diversity working group has funding to pay registrations for Lab attendees to participate in the conference. If you are interested in participating in the conference and having your registration funded by the diversity working group, please send your name, your supervisor’s name, and a brief paragraph describing why you would like to attend to cs-diversity@george.lbl.gov. Send any questions to the same email address.


This Week’s Computing Sciences Seminars

Scientific Workflow Integration for Services Computing
Monday, February 28, 10:00–11:00 am, 50F-1647
Cui Lin, Valdosta State University, Georgia

Driven by the demands of the “data deluge” in today’s scientific research, scientific workflows have recently emerged as a new paradigm for scientists to integrate and orchestrate a wide range of analytical tools into complex scientific processes that accelerate scientific discoveries. A scientific workflow provides a formal specification of a scientific process, representing and automating the steps from dataset selection and integration, through computation and analysis, to final data presentation and visualization. A scientific workflow management system (SWFMS) supports the specification, scheduling, and execution of scientific workflows in distributed and heterogeneous computing environments, such as high performance clusters, grids, and clouds.

The talk presents an integrated solution to compose, schedule, execute, and manage scientific workflows in a scientific workflow management system. As a foundation for developing such a system, our proposed reference architecture for SWFMSs will be introduced; this architecture has been widely accepted by the scientific workflow community. To integrate heterogeneous services and applications into workflows, a task template model, a task run model, and their supporting languages, TSL and TRDL, will be introduced to provide an appropriate abstraction of heterogeneous services and applications at both design time and run time. To schedule workflows in a cloud computing environment, we propose two workflow scheduling algorithms, SHEFT and SCPOR, which prioritize the tasks in a workflow, map them onto suitable resources, and optimize the order of task execution on the assigned resources. Our extensive experiments show that the proposed algorithms not only outperform other algorithms for data-intensive and compute-intensive workflows, but also allow the assigned resources to change elastically as workflows scale. Finally, our VIEW scientific workflow management system and a VIEW-based workflow application, the FiberFlow system, validate our architectures, models, languages, and algorithms.
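For readers unfamiliar with list scheduling, the Python sketch below illustrates the HEFT family of algorithms that SHEFT extends: rank each task by its distance to the exit of the workflow DAG, then greedily map tasks, in rank order, onto the resource giving the earliest finish time. The task graph, costs, and two-resource setup are hypothetical, and SHEFT's elastic resizing of cloud resources is not shown.

    # Hypothetical workflow DAG: task -> list of (successor, comm cost)
    dag = {
        "A": [("B", 2), ("C", 4)],
        "B": [("D", 3)],
        "C": [("D", 1)],
        "D": [],
    }
    # Compute cost of each task on each of two resources
    cost = {"A": [3, 5], "B": [4, 4], "C": [2, 6], "D": [5, 3]}

    def upward_rank(task, memo={}):
        """Average compute cost plus the critical path to the exit task."""
        if task not in memo:
            avg = sum(cost[task]) / len(cost[task])
            memo[task] = avg + max(
                (c + upward_rank(s) for s, c in dag[task]), default=0.0
            )
        return memo[task]

    # 1. Prioritize: sort tasks by decreasing upward rank.
    order = sorted(dag, key=upward_rank, reverse=True)

    # 2. Map: give each task to the resource with the earliest finish time
    #    (communication costs are folded into the ranks only, for brevity).
    ready = [0.0, 0.0]            # next free time per resource
    finish, placement = {}, {}
    for t in order:
        preds = [p for p in dag if any(s == t for s, _ in dag[p])]
        est = max((finish[p] for p in preds), default=0.0)
        r = min(range(2), key=lambda i: max(ready[i], est) + cost[t][i])
        finish[t] = max(ready[r], est) + cost[t][r]
        ready[r] = finish[t]
        placement[t] = r

    print(placement, finish)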

HPC Challenges for Reactor Physics Simulations
Friday, February 25, 11:00 am–12:00 pm, 50F-1647
Christophe Calvin, Commissariat à l’Energie Atomique (CEA), Saclay, France

The aim of this talk is to present some major HPC challenges for reactor physics simulations and how these challenges are addressed at CEA/DEN.

We can consider different targets of use for high performance computing in reactor physics. Depending on the target, different levels and techniques can be used. These techniques, in turn, enable higher levels of simulation, such as:

  • Parameterized calculations: this is the basic technique for optimization. HPC is a great opportunity to take more parameters into account and to reduce “time to market.” It also enables optimization techniques, such as neural networks, that automatically find optimal sets of parameters (see the sketch after this list).
  • High resolution physics: greater memory capacity and greater CPU power are required for more refined models in each discipline, for instance deterministic transport instead of a few-group diffusion approach.
  • More realistic physics: systematically using real physical models instead of simplified models or pre-tabulated values. This requires not only greater CPU power but also robust, easy-to-use coupled systems.
  • Real time simulation: this already exists, but we can imagine improving the modeling to obtain more realistic simulators and reduce the number of assumptions.
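As a concrete illustration of the first bullet, here is a minimal Python sketch of a parallel parameterized calculation. The model function, parameter names, and ranges are hypothetical stand-ins; a real study would invoke a full reactor-physics code for each parameter set.

    from itertools import product
    from multiprocessing import Pool

    def simulate(params):
        """Stand-in for one expensive physics calculation."""
        enrichment, pitch = params
        # Toy figure of merit peaking at (4.2, 1.26); purely illustrative.
        return params, -(enrichment - 4.2) ** 2 - (pitch - 1.26) ** 2

    if __name__ == "__main__":
        grid = list(product(
            [3.0 + 0.2 * i for i in range(15)],    # fuel enrichment (%)
            [1.20 + 0.01 * j for j in range(10)],  # lattice pitch (cm)
        ))
        with Pool() as pool:          # one worker per available core
            results = pool.map(simulate, grid)
        best = max(results, key=lambda r: r[1])
        print("best parameter set:", best[0], "score:", best[1])

On an HPC system the same pattern scales out with MPI or a batch scheduler instead of a local process pool, which is what makes far larger parameter spaces tractable.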

All these improvements are needed to increase margins by reducing uncertainties, optimize designs, improve safety, optimize operating conditions, and increase physics knowledge.

All these different levels of HPC will be illustrated with various kinds of applications and parallel programming techniques in the framework of codes developed at CEA/DEN (APOLLO3, TRIPOLI-4, ...). Results are presented for fuel load management using genetic algorithms, domain decomposition for transport solvers, and GPU acceleration of the Boltzmann equation solution, on scales ranging from a few cores to massively parallel computing with more than 10,000 cores.
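To give a flavor of the genetic-algorithm approach mentioned for fuel load management, here is a minimal Python sketch. The binary encoding and toy fitness function are hypothetical stand-ins; an actual objective would evaluate a full core model for each candidate loading pattern.

    import random

    random.seed(0)
    N_ASSEMBLIES, POP_SIZE, GENERATIONS = 12, 30, 50

    def fitness(layout):
        """Toy objective: reward alternating fresh (1) and burnt (0) fuel."""
        return sum(layout[i] != layout[i + 1] for i in range(len(layout) - 1))

    def crossover(a, b):
        cut = random.randrange(1, N_ASSEMBLIES)
        return a[:cut] + b[cut:]

    def mutate(layout, rate=0.05):
        return [1 - g if random.random() < rate else g for g in layout]

    pop = [[random.randint(0, 1) for _ in range(N_ASSEMBLIES)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: POP_SIZE // 2]   # simple elitist selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children

    best = max(pop, key=fitness)
    print("best layout:", best, "fitness:", fitness(best))

Because each fitness evaluation is independent, the population can be evaluated in parallel, which is why this class of method maps well onto the large core counts mentioned above.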



About Computing Sciences at Berkeley Lab

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.