
InTheLoop | 01.18.2011


Planck Mission Peels Back Layers of Universe

The Planck mission, a European Space Agency spacecraft with significant contributions from NASA that is designed to measure the cosmic microwave background, has released a new catalog of data from its initial maps of the entire sky. The catalog includes everything from thousands of never-before-seen dusty cocoons where stars are forming to some of the most massive clusters of galaxies.

The U.S. Planck collaboration performed its data analysis for this release at NASA’s Infrared Processing and Analysis Center (IPAC) and at NERSC, where the team has both a dedicated cluster and a significant allocation of time on the NERSC supercomputers. Read more.


Cray XE6 Training February 7-8 at NERSC

You are invited to register for a training class on February 7-8 focused on the new Hopper Cray XE6 system, presented by NERSC, Los Alamos National Laboratory, and Cray, Inc. The training will be held at the Oakland Scientific Facility and broadcast as a webinar.

The workshop is aimed at both new and intermediate users of the Cray X line of supercomputers. It is intended for those who already have some high-performance computing experience. The ability to use Linux, Fortran, C, and/or C++, and exposure to parallel programming concepts using the Message Passing Interface (MPI), are expected. Participation is not restricted to NERSC users.

Day 1 will be an introduction to using modern supercomputers with an emphasis on the Cray XE6, including an introduction to thread programming with OpenMP. Day 2 will present intermediate instruction on effective and efficient use of the Cray XE6. Register and view the agenda here. Space for this event is limited, but if there is sufficient demand it will be repeated.


ESnet Seeks Strategic Partnerships and Outreach Coordinator

ESnet has an immediate opening for a Strategic Partnerships and Outreach Coordinator. This person will take the lead in promoting the adoption of ESnet resources by new and existing users, as well as arranging outreach and training. Specific responsibilities include developing a comprehensive strategy for assessing customer requirements and improving customer-facing tools and services. See job details. The Lab’s Employee Referral Incentive Program (ERIP) awards $1,000 (net) to employees whose referral of an external candidate leads to a successful hire.


Revolution: The First 2000 Years of Computing

The Computer History Museum in Mountain View, CA, home to the world’s largest information technology collection, unveiled a 21st century makeover last week when it reopened its newly renovated building, with more than double the exhibition space, added research and education components, and a vast new digital platform.

The Museum’s reopening marks the successful completion of a two-year, $19 million renovation of the former Silicon Graphics, Inc. (SGI) headquarters that will bring to life the remarkable story of the birth, growth, and future of computing for hundreds of thousands of visitors annually. The centerpiece of the Museum’s makeover is “Revolution: The First 2000 Years of Computing,” a sweeping, modern exhibition designed to look at every major aspect of the extraordinary history of computing—from the abacus to the smart phone, and the vital progress in between. Read more.


This Week’s Computing Sciences Seminars

Moving Grids: Space Conservation Law and Multigrid Methods
Tuesday, January 18, 10:00–11:00 am, 2-100B
Srinivas Yarlanki, IBM T. J. Watson Research Center

It is well known that an additional conservation equation, namely the space conservation law (SCL) or geometric conservation law (GCL), has to be satisfied on moving grids; otherwise erroneous solutions are obtained and numerical instability may occur. It will be shown that the SCL is just one of a set of additional constraints that have to be satisfied on moving grids. Even when the SCL is satisfied, neglecting the other constraints of this set may allow errors in the form of artificial sources/sinks to accumulate on moving grids, leading to erroneous solutions.
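
In standard form (shown here for reference; the talk’s precise statement may differ), the SCL requires that the rate of change of a moving cell’s volume match the flux of the grid velocity through its faces:

    \frac{d}{dt} \int_{V(t)} dV \;=\; \oint_{\partial V(t)} \mathbf{v}_g \cdot \mathbf{n} \, dS

Here V(t) is a moving control volume, v_g is the grid velocity, and n is the outward normal. A discretization that violates this identity manufactures spurious volume, which shows up as exactly the kind of artificial sources/sinks described above.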

In the second part of this talk, I will discuss a modification of the geometric multigrid technique for application to moving grids. While multigrid techniques have been applied to moving grids, this has been done when the grid movement was known a priori and/or calculated explicitly. When the grid movement is not known a priori and is calculated implicitly as part of the solution, application of the standard multigrid algorithm on moving grids may lead to numerical instabilities. I will show how slight modifications of the standard algorithm can lead to convergence acceleration on moving grids when the grid movement is not known a priori and is calculated implicitly.
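
For readers less familiar with the baseline being modified, the following is a minimal sketch of a textbook geometric multigrid V-cycle, written in Python for a fixed (non-moving) one-dimensional Poisson problem. It illustrates only the standard algorithm mentioned above, not the speaker’s moving-grid variant.

    import numpy as np

    def smooth(u, f, h, sweeps=3):
        # Gauss-Seidel sweeps for -u'' = f with homogeneous Dirichlet BCs
        for _ in range(sweeps):
            for i in range(1, len(u) - 1):
                u[i] = 0.5 * (u[i-1] + u[i+1] + h*h*f[i])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / (h*h)
        return r

    def restrict(r):
        # full weighting onto the coarse grid (every other point)
        rc = np.zeros((len(r) + 1) // 2)
        rc[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]
        return rc

    def prolong(ec, n_fine):
        # linear interpolation of the coarse correction to the fine grid
        e = np.zeros(n_fine)
        e[::2] = ec
        e[1::2] = 0.5 * (ec[:-1] + ec[1:])
        return e

    def v_cycle(u, f, h):
        if len(u) <= 3:
            u[1] = 0.5 * (u[0] + u[2] + h*h*f[1])   # coarsest level: exact solve
            return u
        u = smooth(u, f, h)                          # pre-smooth
        ec = v_cycle(np.zeros((len(u) + 1) // 2),
                     restrict(residual(u, f, h)), 2*h)
        u += prolong(ec, len(u))                     # coarse-grid correction
        return smooth(u, f, h)                       # post-smooth

    # usage: -u'' = pi^2 sin(pi x) on [0,1], whose exact solution is sin(pi x)
    n = 129
    x = np.linspace(0.0, 1.0, n)
    u, f = np.zeros(n), np.pi**2 * np.sin(np.pi * x)
    for _ in range(10):
        u = v_cycle(u, f, 1.0 / (n - 1))
    print(abs(u - np.sin(np.pi * x)).max())  # error is at the discretization level

On a moving grid, the operators, the metric terms, and (in the implicit case) the grid itself change within the solve, which is where the modifications discussed in the talk come in.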

The Materials Genome Project: Using High-Throughput Computing to Design New Materials for Clean Energy
Tuesday, January 18, 11:00 am–12:00 pm, 70-191
Anubhav Jain, Massachusetts Institute of Technology

New materials can help to overcome several key challenges in the renewable energy field, such as realizing clean transportation through better storage or achieving cost-effective energy generation through solar photovoltaics. However, materials development has traditionally been a painstaking process that relies heavily on exhaustive experiments and researcher intuition. In this presentation, I will describe how it is possible to canvass large chemical spaces for interesting materials computationally so that experimental studies are focused on the most promising materials. Such computational screening is achieved by running density functional theory (DFT) calculations, which model fundamental materials properties from quantum mechanics, in a high-throughput mode. In a time span of about three years, we have completed DFT calculations on over 60,000 compounds, which is on the scale of the number of inorganic materials with known crystal structures (about 100,000).
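
Schematically, the screening pipeline has the filter-and-rank shape sketched below. The function bodies are hypothetical stand-ins (in a real campaign, run_dft is hours of supercomputer time per compound, executed in high-throughput batches); only the overall pattern is meant.

    def run_dft(structure):
        # stand-in for a real density functional theory calculation
        return structure

    def estimate_property(result):
        # stand-in for a property derived from the DFT output,
        # e.g., a predicted voltage or stability score
        return result["score"]

    def screen(candidates, threshold):
        scored = [(c["name"], estimate_property(run_dft(c))) for c in candidates]
        # keep and rank the promising ones so experiments focus on the best
        return sorted((s for s in scored if s[1] >= threshold),
                      key=lambda s: s[1], reverse=True)

    print(screen([{"name": "compound-A", "score": 0.9},
                  {"name": "compound-B", "score": 0.2}], threshold=0.5))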

In the area of lithium-ion battery cathodes, we have discovered many interesting new materials using these high-throughput DFT calculations. Several of these compounds have subsequently been synthesized and are currently being experimentally optimized. One example — a novel class of lithium metal carbonophosphate compounds — has the potential to exhibit improved energy density as compared to current cathode materials while retaining high safety.

Moving beyond batteries, I will present how such data can be mined for other applications, such as the screening of materials for Hg capture from coal gasification streams. One challenge for the future is the automated calculation of excited state properties, for which DFT often predicts qualitatively incorrect behavior. I will demonstrate initial work in this area towards the screening of novel materials for photovoltaic applications.

Looking ahead, I will present opportunities for improved algorithms and data infrastructure to expand the quantity and quality of data available with this technique, thereby allowing more material classes and potential applications to be investigated. We hope to incorporate such advancements into a public version of this database that provides high-quality computational data on all known inorganic compounds. Such a “Materials Genome” could help researchers better understand, classify, or discover materials across a variety of applications.

Numerical Simulations of Type Ia Supernovae
Wednesday, January 19, 10:00–11:00 am, 50F-1647
Andrew Nonaka, Center for Computational Sciences and Engineering (CCSE), LBNL

Type Ia supernovae are of great interest to the astrophysical community for their use as “standard candles” for mapping the observable universe. I will discuss our work in Type Ia supernovae simulations, starting from one-dimensional white dwarf models through the explosion phase using three-dimensional, finite-volume adaptive mesh refinement algorithms. Particular emphasis will be on our low Mach number code, MAESTRO, which is suitable for simulating the final hours of convection preceding ignition. I will describe the developmental history of MAESTRO, beginning from a first-principles mathematical model up through the resulting, massively parallel implementation for full star simulations. Results will be presented summarizing the first ever full-star simulations of the convective period preceding ignition. Finally, I will discuss the implications of mapping data from MAESTRO into a fully compressible framework to continue stellar evolution past ignition.
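
Schematically, low Mach number formulations of this kind trade explicit sound waves for a constraint on the velocity field; in the MAESTRO papers it takes a form like

    \nabla \cdot (\beta_0 \mathbf{U}) = \beta_0 S

where U is the fluid velocity, β₀ is a density-like coefficient built from the hydrostatic background state, and S collects local compressibility sources such as nuclear heating. Filtering the acoustics this way is what allows time steps long enough to simulate hours of pre-ignition convection.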

Numerical Simulation of Magnetic Reconnection in Dynamically Adaptive Grids via NKS
Wednesday, January 19, 11:00 am–12:00 pm, 50B-2222
Xuefei Yuan (Alvarez Fellowship Candidate), Columbia University

Numerical simulations of the four-field extended magnetohydrodynamics (MHD) equations with hyper-resistivity terms present a difficult challenge because of demanding spatial resolution requirements. An r-type, variational grid generator based on an equidistribution principle defines a new grid at each time step by solving a single Monge-Ampère (MA) equation, adaptive to the corresponding current density. This time-dependent sequence of grid coordinates defines a grid velocity that enters the equation set, thereby avoiding the need for a separate rezone step. The MHD equations are transformed from Cartesian coordinates so that the solution-defined curvilinear coordinates replace Cartesian coordinates as the independent variables. The application of an implicit scheme to the time-dependent problem ensures that the time step size is restricted not by a Courant-Friedrichs-Lewy (CFL) condition but only by accuracy. The Newton-Krylov-Schwarz (NKS) algorithm is used to solve the above systems in parallel at each time iteration. Convergence studies show that curvilinear solutions converge faster than Cartesian solutions, and accuracy studies show that curvilinear solutions can achieve the same accuracy as Cartesian solutions with fewer grid points.
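
For reference, equidistribution-based generators of this type compute the new grid as the gradient map x = ∇φ(ξ) of a potential satisfying a Monge-Ampère equation, in a standard form such as (the talk’s exact formulation may differ)

    M\big(\nabla\phi(\xi)\big)\,\det\big(D^2\phi(\xi)\big) = c

where M is a monitor function (here built from the current density), D²φ is the Hessian of the potential, and the constant c normalizes the total area. Cells contract where M is large, concentrating resolution in the current sheets.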

Automating Topology Aware Mapping for Supercomputers
Thursday, January 20, 10:00–11:00 am, 50F-1647
Abhinav Bhatele (Alvarez Fellowship Candidate), University of Illinois at Urbana-Champaign

Petascale machines with hundreds of thousands of cores are being built. These machines have varying interconnect topologies and large network diameters. Computation is cheap, and communication on the network is becoming the bottleneck for scaling of parallel applications. Network contention, specifically, is becoming an increasingly important factor affecting overall performance. Most parallel applications have a certain communication topology. Mapping the tasks of a parallel application to the physical processors of a machine, based on their communication graph, can potentially lead to performance improvements. Placement of communicating tasks on nearby physical processors can minimize the distance traveled by messages and reduce the chances of contention.

Performance improvements through topology-aware placement for applications such as NAMD and OpenAtom are used to motivate this work. Building on these ideas, I will present algorithms and techniques for the automatic mapping of parallel applications, relieving application developers of this burden. The hop-bytes metric is proposed for evaluating mapping algorithms, as a better measure than the previously used maximum dilation metric. The main focus of this work is on developing topology-aware mapping algorithms for parallel applications with regular and irregular communication patterns. The automatic mapping framework is a suite of such algorithms with capabilities to choose the best mapping for a problem with a given communication graph.
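
A minimal sketch of the hop-bytes idea: weight each communicated byte by the network distance it travels, so mappings that keep heavy communicators close score lower. The 2D-mesh distance function and the toy graph below are illustrative assumptions, not part of the speaker’s framework.

    def hops_2d_mesh(a, b):
        # Manhattan distance between (x, y) processor coordinates on a 2D mesh;
        # real machines (tori, fat trees) need their own distance function
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def hop_bytes(comm_graph, mapping):
        # comm_graph: {(task_u, task_v): bytes exchanged}
        # mapping:    {task: (x, y) processor coordinate}
        return sum(vol * hops_2d_mesh(mapping[u], mapping[v])
                   for (u, v), vol in comm_graph.items())

    # toy example: a good mapping keeps the heavily communicating pairs adjacent
    graph = {(0, 1): 1000, (1, 2): 1000, (0, 2): 10}
    good = {0: (0, 0), 1: (0, 1), 2: (0, 2)}
    bad  = {0: (0, 0), 1: (0, 2), 2: (0, 1)}
    print(hop_bytes(graph, good), hop_bytes(graph, bad))  # 2020 vs. 3010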

Mathematical Studies of Electronic Structure Theory
Friday, January 21, 10:00–11:00 am, 50B-4205
Lin Lin (Alvarez Fellowship Candidate), Princeton University

Electronic structure theory describes the distribution of electrons in molecules and in solids. Among the different formalisms in electronic structure theory, Kohn-Sham density functional theory (KSDFT) is by far the most widely used and practical approach, achieving the best compromise between accuracy and efficiency. The computational complexity of a standard implementation of KSDFT is O(N^3), where N is the number of electrons. The cubic scaling with respect to N limits the application of KSDFT to systems with at most tens of thousands of atoms. Reducing the computational complexity of KSDFT requires an in-depth study of its mathematical structure. In this talk I will present novel techniques that allow us to reduce the O(N^3) scaling to O(N^1.5) for two-dimensional surface systems and O(N^2) for three-dimensional bulk systems. These techniques are quite general from a numerical analysis point of view and have applications beyond the study of electronic structure theory.
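
For context, KSDFT reduces the interacting many-electron problem to the self-consistent single-particle Kohn-Sham equations (in atomic units):

    \Big( -\tfrac{1}{2}\nabla^2 + V_{\mathrm{eff}}[\rho](\mathbf{r}) \Big)\, \psi_i(\mathbf{r}) = \varepsilon_i\, \psi_i(\mathbf{r}),
    \qquad
    \rho(\mathbf{r}) = \sum_{i=1}^{N} |\psi_i(\mathbf{r})|^2

The O(N^3) cost of the standard approach comes from computing N mutually orthogonal orbitals ψ_i, effectively a dense diagonalization; avoiding the explicit construction of all the orbitals is one route to the reduced scalings discussed in the talk.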


Link of the Week: Philosophy In The Flesh

“We are neural beings,” states UC Berkeley cognitive scientist George Lakoff. “Our brains take their input from the rest of our bodies. What our bodies are like and how they function in the world thus structures the very concepts we can use to think. We cannot think just anything—only what our embodied brains permit.”

His book Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought, coauthored by Mark Johnson, emphasizes the following points: “The mind is inherently embodied. Thought is mostly unconscious. Abstract concepts are largely metaphorical.” In an Edge interview, Lakoff comments on the body as the source of mathematics:

[T]he ordinary embodied mind, with its image schemas, conceptual metaphors, and mental spaces, has the capacity to create the most sophisticated of mathematics via using everyday conceptual mechanisms….

What we conclude is that mathematics as we know it is a product of the human body and brain; it is not part of the objective structure of the universe—this or any other…. The explanation of why mathematics “works so well” is simple: it is the result of tens of thousands of very smart people observing the world carefully and adapting or creating mathematics to fit their observations….

We, with our physical bodies and brains, are the source of reason, the source of mathematics, the source of ideas…. That makes each embodied human being (the only kind) infinitely valuable—a source not a vessel. It makes bodies infinitely valuable—the source of all concepts, reason, and mathematics.

For two millennia, we have been progressively devaluing human life by underestimating the value of human bodies. We can hope that the next millennium, in which the embodiment of mind will come to be fully appreciated, will be more humanistic.

The Berkeley Lab Philosophy Club will discuss chapter 25 of Philosophy in the Flesh from noon to 1:00 pm on Friday, January 21, in the Perseverance Hall addition. (For a copy of the chapter, contact Peter McCorquodale.) Two days later, George Lakoff will be speaking about “The Extended Mind” at a live taping of Philosophy Talk at 3 pm on Sunday, January 23, at the Marsh in downtown Berkeley. Prof. Lakoff will be interviewed by Stanford University philosophers John Perry and Ken Taylor, and audience members will have the opportunity to ask questions.



About Computing Sciences at Berkeley Lab

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.