InTheLoop | 09.09.2013
ASCR PM Lucy Nowell at LBNL this Week, Giving Talk on Writing Good Proposals
Lucy Nowell, the Data and Visualization Program Manager in DOE’s Office of Advanced Scientific Computing Research, will be at Berkeley Lab Monday and Tuesday for a series of meetings with CS researchers. She manages a broad spectrum of ASCR-funded computer science research, with a particular emphasis on scientific data management and analysis.
As part of her visit, Nowell will give a talk on “How To Write A Good Proposal” at 11 a.m. Tuesday, September 10, in Bldg. 50B, Room 4205.
Here’s the abstract for her presentation: “Across the federal government, more and more people are writing research proposals and success rates are shrinking. What should you know before you write a proposal to maximize your chances of success? How can you structure your proposal to increase the probability that reviewers will respond favorably? What is the role of a Program Manager (PM) in this process and how should you relate to your own PM when you win an award? Lucy will share her insights about these and related issues, based on nearly a decade of program management experience at multiple federal agencies.”
NERSC Calculations Independently Confirm Global Land Warming Since 1901
In making the case for global climate warming, researchers rely on long-term measurements of air temperature taken around the world at a height of two meters (about six and a half feet). However, some climate researchers and others point to those same measurements as an inconsistent record, and therefore as unreliable indicators of actual temperature variability and long-term trends. But in a paper published recently in Geophysical Research Letters, a team of scientists led by Gil Compo of the University of Colorado and the National Oceanic and Atmospheric Administration’s (NOAA) Earth System Research Laboratory in Boulder, Colorado, reports that land surface air temperatures estimated from a number of other historical observations, including barometric pressure, but without directly using the land temperature measurements themselves, largely match the existing temperature measurements spanning the years 1901 to 2010. The findings are based on Compo’s 20th Century Reanalysis Project, carried out at NERSC.
“This is really the essence of science,” Compo said. “There is knowledge ‘A’ from one source and you think, ‘Can I get to that same knowledge from a completely different source?’ Since we had already produced the dataset, we looked at just how close our temperatures estimated using barometers were to the temperatures using thermometers.” »Read more
Blast from the Past: Glimpses from NERSC’s 39 Years of HPC Leadership
Since it was established in 1974, the supercomputing center now known as NERSC has achieved a number of milestones, in addition to being the nation’s first computing center for open science. As part of the DOE's September communications focus on scientific supercomputing, we present a sampling of historical photos and factoids being shared each Thursday this month via social media channels. To see the images and blurbs, go to: http://cs.lbl.gov/news-media/news/2013/nostalgia-and-fun-facts/
This Week’s Computing Sciences Seminars
“Wrinkles on Everest: Persistence and Stability in an Omniscalar World”
Wednesday, September 11, 3:30 - 4:30 p.m., 939 Evans Hall - UC Berkeley Campus
Dmitriy Morozov, Visualization Group, Computational Research Division
Abstract: In the last decade, persistent homology emerged as a particularly active topic within the young field of computational topology. Homology is a topological invariant that counts the number of cycles in the space: components, loops, voids, and their higher-dimensional analogs. Persistence keeps track of the evolution of such cycles and quantifies their longevity. By encoding physical phenomena as real-valued functions, one can use persistence to identify their significant features.
This talk is an introduction to the subject, discussing the settings in which persistence is effective as well as the methods it employs. It will touch on the topics of homology inference, dimensionality reduction, and general models of noise. The last part of the talk will describe our recent efforts to parallelize the computation of merge trees, a descriptor closely related to 0-dimensional persistence.
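The 0-dimensional case mentioned above, tracking connected components of sublevel sets and quantifying their longevity, can be computed with a simple union-find sweep. The following is a minimal sketch for a function sampled along a line (not Morozov’s parallel merge-tree algorithm, and the function name is our own):

```python
def persistence_0d(values):
    """0-dimensional persistence of a function sampled on a path graph.

    Sweep vertices in increasing function value (a sublevel-set filtration):
    a component is born at a local minimum and dies when it merges into a
    component with an older (lower) birth; the global minimum never dies.
    Returns a list of (birth, death) pairs.
    """
    n = len(values)
    parent = list(range(n))
    birth = [None] * n            # birth value of the component rooted here
    processed = [False] * n

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    pairs = []
    for i in sorted(range(n), key=lambda k: values[k]):
        processed[i] = True
        birth[i] = values[i]
        for j in (i - 1, i + 1):            # edges of the path graph
            if 0 <= j < n and processed[j]:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # The younger component (larger birth) dies here.
                old, young = (ri, rj) if birth[ri] <= birth[rj] else (rj, ri)
                if birth[young] < values[i]:
                    pairs.append((birth[young], values[i]))
                parent[young] = old
    pairs.append((min(values), float("inf")))   # the essential component
    return pairs
```

For the samples [0, 2, 1, 3], the local minimum of value 1 spawns a component that dies when it merges at value 2, and the global minimum at 0 persists forever, so the pairs are (1, 2) and (0, ∞). Significant features of the function correspond to long-lived pairs.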
Floating-Point Tricks to Solve Boundary-Value Problems Faster
Wednesday, September 11, 2013, 11:00am - 12:00pm
380 Soda Hall - UC Berkeley Campus
Prof. William Kahan, Electrical Engineering and Computer Sciences (EECS), University of California, Berkeley
Abstract: Some old tricks are resuscitated to accelerate the numerical solution of certain discretized boundary-value problems. Without the tricks, half the digits carried by the arithmetic can be lost to roundoff when the discretization's grid-gaps get very small. The tricks can procure adequate accuracy from arithmetic with "float" variables 4 bytes wide instead of "double" variables 8 bytes wide, which move more slowly through the computer's memory system and pipelines. Tricks are tricky for programs written in MATLAB 7+, JAVA, FORTRAN and post-1985 ANSI C. For the original Kernighan-Ritchie C of the late 1970s, and for a few implementations of C99 that fully support IEEE Standard 754 for Binary Floating-Point, the tricks are easy or unnecessary. Examples show how well the tricks work. »More
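The roundoff loss the abstract describes is easy to reproduce. A small illustration (the motivating problem, not one of the talk's tricks): a centered second difference evaluated entirely in 4-byte floats loses its leading digits to cancellation as the grid gap h shrinks, while the same formula in 8-byte doubles remains accurate:

```python
import numpy as np

def second_difference(f, x, h, dtype):
    """Centered approximation to f''(x), evaluated entirely in `dtype`.

    The numerator f(x+h) - 2 f(x) + f(x-h) shrinks like h**2, so its fixed
    roundoff (a few units in the last place of f) is amplified by 1/h**2.
    """
    x, h = dtype(x), dtype(h)
    return (f(x + h) - dtype(2) * f(x) + f(x - h)) / (h * h)

# f(x) = sin(x), so the exact value is f''(1) = -sin(1).
exact = -np.sin(1.0)
for h in (1e-2, 1e-3, 1e-4):
    err32 = abs(float(second_difference(np.sin, 1.0, h, np.float32)) - exact)
    err64 = abs(float(second_difference(np.sin, 1.0, h, np.float64)) - exact)
    print(f"h={h:g}  float error={err32:.2e}  double error={err64:.2e}")
```

As h shrinks, the double error falls (the truncation error is O(h**2)) while the float error grows, exactly the regime where roughly half the digits carried by the arithmetic are gone and the tricks become attractive.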
Parallel Geometric Multigrid Method for Large-scale Simulations of 3D Groundwater Flow Through Heterogeneous Porous Media
Friday, September 13, 2013, 11:00am - 12:00pm
Bldg. 50B, Room 2222
Kengo Nakajima, Information Technology Center, The University of Tokyo
Abstract: The multigrid method used with OpenMP/MPI hybrid parallel programming models is expected to play an important role in large-scale scientific computing on post-peta/exa-scale supercomputer systems. In the present work, the effect of sparse matrix storage formats on the performance of parallel geometric multigrid solvers was evaluated, and a new data structure for the Ellpack-Itpack (ELL) format is proposed. The proposed method is implemented in pGW3D-FVM, a parallel code for 3D groundwater flow simulations using the multigrid method, and the robustness and performance of the code were evaluated on up to 4,096 nodes (65,536 cores) of the Fujitsu FX10 supercomputer system at the University of Tokyo. The parallel multigrid solver using the ELL format with Coarse Grid Aggregation (CGA) provided excellent performance improvement in both weak scaling (13%-35%) and strong scaling (40%-70%) compared to the original code using the CRS format. Finally, a hierarchical CGA (hCGA) method is proposed, and preliminary results are presented.
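For readers unfamiliar with the formats being compared: CRS (compressed row storage) packs each row's nonzeros end to end, while ELL pads every row to a common width, trading wasted entries for regular, fixed-trip-count inner loops that vectorize well. A minimal NumPy sketch of the general ELL idea (an illustration only, not the paper's proposed data structure; the function names are ours):

```python
import numpy as np

def crs_to_ell(indptr, indices, data, n_rows):
    """Convert CRS arrays to ELLPACK-ITPACK layout.

    Every row is padded to the width of the longest row; padded slots keep
    value 0.0 and column 0, so they contribute nothing to a matrix-vector
    product.
    """
    width = max(indptr[i + 1] - indptr[i] for i in range(n_rows))
    ell_cols = np.zeros((n_rows, width), dtype=np.int64)
    ell_vals = np.zeros((n_rows, width))
    for i in range(n_rows):
        start, end = indptr[i], indptr[i + 1]
        k = end - start
        ell_cols[i, :k] = indices[start:end]
        ell_vals[i, :k] = data[start:end]
    return ell_cols, ell_vals

def ell_spmv(ell_cols, ell_vals, x):
    # y[i] = sum_k vals[i, k] * x[cols[i, k]]; same trip count in every row.
    return (ell_vals * x[ell_cols]).sum(axis=1)
```

For example, the matrix [[4, 1, 0], [0, 3, 0], [2, 0, 5]] has CRS arrays indptr = [0, 2, 3, 5], indices = [0, 1, 1, 0, 2], data = [4, 1, 3, 2, 5]; its ELL form is 3 rows of width 2, with one padded slot in the middle row. The padding overhead is why row-length variability, as on the aggregated coarse grids of a multigrid hierarchy, motivates modified ELL data structures like the one proposed in the talk.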