
InTheLoop | 01.31.2011


NERSC and CRD Contributed to Four of Ten “Insights of the Decade”

In the first decade of this millennium, rapid scientific progress—resulting from new sensing, imaging, computing, and networking tools—has transformed whole areas of research. The December 17, 2010 special issue of Science magazine highlights ten “Insights of the Decade” that have fundamentally changed our thinking about our bodies and our universe. Four of those ten insights were enabled in part by facilities and research in Lawrence Berkeley National Laboratory’s National Energy Research Scientific Computing Center (NERSC) and Computational Research Division (CRD). Read more.
----------------------------------

A Time Machine for Understanding Earth’s Climate

Using supercomputers at NERSC, the 20th Century Reanalysis Project pieced together a new dataset for all global weather events from 1871 to the present day, providing an unprecedented record of past weather variability. The results were recently published in the Quarterly Journal of the Royal Meteorological Society. This dataset not only allows researchers to understand the long-term impact of extreme weather, but provides key historical comparisons for our own changing climate. Read more.
----------------------------------

NERSC Users Unlock the Secret of a Rechargeable Heat Battery

Using quantum mechanical calculations at NERSC, MIT’s Jeffrey Grossman found that the fulvalene diruthenium molecule undergoes a structural transformation when it absorbs sunlight, putting the molecule into a charged state where it can remain stable indefinitely. When triggered by a catalyst, the molecule snaps back to its original shape, releasing heat in the process. This finding could help researchers produce a battery that repeatedly stores and releases heat gathered from sunlight. Read more.
----------------------------------

NERSC Makes UC Berkeley Parallel Computing Class Available Online

The UC Berkeley graduate-level course “Applications of Parallel Computers,” co-taught by LBNL Faculty Scientist Jim Demmel and NERSC Director Kathy Yelick, who both have faculty appointments at UC Berkeley, has been made available online for NERSC users to audit. The course is broadcast live over the Internet Tuesday and Thursday mornings from 9:30 to 11:00 PST. Lectures and class materials are also archived, and NERSC has set up a web page where users and staff can discuss the material.
----------------------------------

CS Staff Edit, Contribute to Special Issue of Scientific Programming

Alice Koniges of NERSC and Gabriele Jost of the Texas Advanced Computing Center are guest editors of the January 21, 2011 special issue of the journal Scientific Programming, titled “Exploring Languages for Expressing Medium to Massive On-Chip Parallelism.” Several CRD and NERSC researchers contributed articles, including:

  • Alice Koniges: “Guest Editorial: Special Issue: Exploring languages for expressing medium to massive on-chip parallelism” (with Gabriele Jost)
  • Robert Preissl and Alice Koniges: “Overlapping communication with computation using OpenMP tasks on the GTS magnetic fusion code” (with Stephan Ethier, Weixing Wang and Nathan Wichmann)
  • Hongzhang Shan, Filip Blagojević, Seung-Jai Min, Paul Hargrove, Alice Koniges and Nicholas J. Wright: “A programming model performance study using the NAS parallel benchmarks” (with Haoqiang Jin and Karl Fuerlinger)
  • Yili Zheng: “Optimizing UPC programs for multi-core systems”

----------------------------------

Berkeley Lab Staff to Share Expertise at Networking Meeting

Twice a year, experts from the leading government and academic networks and research centers meet to discuss the latest technologies, tools and trends at the Joint Techs Meeting. Cosponsored by ESnet and Internet2, the Winter 2011 Joint Techs meeting is being held this week (Jan. 30–Feb. 2) at Clemson University in South Carolina.

At the meeting, Lawrence Berkeley National Laboratory staff will make a number of presentations. Berkeley Lab talks include:

  • “Automated GOLEs and Fenius: Pragmatic Interoperability” by Evangelos Chaniotakis, ESnet
  • “Green Ethernet” by Mike Bennett, IT Division
  • “3D Modeling and Visualization of Real Time Security Events” by Dan Klinedinst, IT Division
  • “The Science DMZ — A well-known location for high-performance network services” by Eli Dart, ESnet
  • “End-to-End Data Transfer Performance with Periscope and NetLogger” by Dan Gunter, Advanced Computing for Science Department
  • “ESnet Update” by Steve Cotter, ESnet

Ted Sopher of the IT Division is co-chair of the Emerging Technologies focus area.

The ESnet Site Coordinating Committee (ESCC) winter meeting will follow Joint Techs on Feb. 2–3 and will include the following presentations:

  • Greg Bell will discuss the Site Cloud/Remote Services template
  • Steve Cotter will present an ESnet update
  • Brian Tierney will discuss ANI status and opportunities
  • Steve Cotter will discuss site planning for the upcoming 100G production backbone
  • Eli Dart will talk about science drivers for ESnet planning and the Science DMZ
  • Kevin Oberman will participate in a focus section on IP address management issues, emphasizing IPv6
  • Joe Metzger will discuss DICE monitoring directions

----------------------------------

The Tale of Little Big Iron and the Three Skinny Guys

Wes Bethel and Hank Childs of Berkeley Lab, along with colleagues from five other institutions, have published a paper in the January/February issue of IEEE Computer Graphics and Applications titled “Visualization at Supercomputing Centers: The Tale of Little Big Iron and the Three Skinny Guys.” It’s already attracting attention, including mentions in insideHPC and VizWorld. Here is the abstract:

Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources—the “Big Iron.” Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it’s natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys doesn’t receive the same level of treatment as that of the Big Iron.

This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of visualization support staff: How big should a visualization program be—that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?

----------------------------------

This Week’s Computing Sciences Seminars

HPC Seminar
Tuesday, February 1, 8:00–9:30 pm, OSF 943-238
Brian Austin, NERSC Petascale Postdoc

No abstract available.

LAPACK Seminar: Fast Katz and Commuters — Quadrature Rules and Sparse Linear Solvers for Link Prediction Heuristics
Wednesday, February 2, 11:10 am–12:00 pm, Soda Hall, UC Berkeley
David Gleich, Sandia National Laboratories

Motivated by social network data mining problems such as link prediction and collaborative filtering, significant research effort has been devoted to computing topological measures including the Katz score and the commute time. Existing approaches approximate all pairwise relationships simultaneously. We are interested in two more targeted problems: computing the score for a single pair of nodes, and finding the top-k nodes with the best scores from a given source node.

For the pairwise problem, we introduce an iterative algorithm that computes upper and lower bounds for the measures we seek. This algorithm exploits a relationship between the Lanczos process and a quadrature rule.

For the top-k problem, we propose an algorithm that only accesses a small portion of the graph, similar to algorithms used in personalized PageRank computation. To test scalability and accuracy, we experiment with three real-world networks and find that our algorithms run in milliseconds to seconds without any preprocessing.
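
For readers who would like a concrete feel for the quantity involved, below is a minimal Python/SciPy sketch of a pairwise Katz score computed by a direct sparse solve. It is not Gleich’s bounded Lanczos/quadrature algorithm or his top-k method; the toy graph, the damping parameter alpha, and the node pair are purely illustrative.

    # Minimal sketch of a pairwise Katz score, K = (I - alpha*A)^{-1} - I,
    # computed with a single sparse linear solve. This is NOT the bounded
    # Lanczos/quadrature algorithm from the talk; it only illustrates the
    # quantity being approximated. Graph, alpha, and node pair are made up.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Toy undirected graph as a sparse adjacency matrix (5 nodes).
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (1, 3)]
    n = 5
    rows = [u for u, v in edges] + [v for u, v in edges]
    cols = [v for u, v in edges] + [u for u, v in edges]
    A = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))

    # alpha must stay below 1/lambda_max(A) for the Katz series to converge.
    lam_max = spla.eigsh(A, k=1, return_eigenvectors=False)[0]
    alpha = 0.5 / lam_max

    def katz_pair(A, alpha, u, v):
        """Katz score between u and v: e_u^T ((I - alpha*A)^{-1} - I) e_v."""
        n = A.shape[0]
        e_v = np.zeros(n)
        e_v[v] = 1.0
        x = spla.spsolve(sp.identity(n, format="csr") - alpha * A, e_v)
        return x[u] - (1.0 if u == v else 0.0)

    print(katz_pair(A, alpha, 0, 3))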

Cluster Cosmology: Opportunities and Challenges
Thursday, February 3, 12:00–1:00 pm, 50F-1647
Zarija Lukic, Los Alamos National Laboratory

As the most massive objects in the Universe, and likely the end point of hierarchical structure formation, galaxy clusters offer unique insight into the cosmological growth of structure. Several observational campaigns, including X-ray, optical, and microwave telescopes, have either just started to collect data on clusters or will start in the near future. From the theory side, there are two major roadblocks to doing cluster cosmology: predicting the abundance of clusters for different cosmological models, and finding a precise way to measure cluster masses from a given observable. In this talk, I will present our latest results on the mass function from numerical simulations, outlining some future challenges. I will also present a method for classifying clusters as “relaxed” or “merging” in an observationally applicable way, and show how their fraction can be used for cluster cosmology and how this classification improves the quality of mass-observable relations. Finally, throughout the talk, I will showcase the application of cluster cosmology to two kinds of cluster surveys (X-ray and SZ) and to putting constraints on early dark energy models.
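
As illustrative background only, here is a short Python sketch of the basic abundance measurement behind cluster cosmology: binning a halo catalog into a mass function dn/dlnM. The catalog below is synthetic, and the box volume, mass range, and binning are hypothetical stand-ins rather than anything from Lukic’s simulations.

    # Minimal sketch of measuring a halo mass function dn/dln(M) from a
    # simulated halo catalog, the basic quantity behind cluster-abundance
    # cosmology. The catalog here is synthetic; a real analysis would read
    # halo masses and the box volume from a simulation.
    import numpy as np

    box_volume = 1000.0**3          # (Mpc/h)^3, illustrative box size
    rng = np.random.default_rng(0)
    # Hypothetical catalog: masses uniform in log M (not a realistic mass function).
    masses = 10 ** rng.uniform(13.0, 15.5, size=200000)   # Msun/h

    bins = np.logspace(13.0, 15.5, 26)          # mass bin edges
    counts, _ = np.histogram(masses, bins=bins)
    dlnM = np.diff(np.log(bins))                # ln-width of each bin
    dn_dlnM = counts / (box_volume * dlnM)      # comoving number density per ln M

    bin_centers = np.sqrt(bins[:-1] * bins[1:])
    for M, n in zip(bin_centers, dn_dlnM):
        print(f"M = {M:.3e} Msun/h   dn/dlnM = {n:.3e} (h/Mpc)^3")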

Integration of Lagrangian Sensor Data into Hydrodynamics Models
Friday, February 4, 11:00 am–12:00 pm, 50B-4205
Alexandre Bayen, UC Berkeley

Sensor packages that move within the system they measure pose new challenges for researchers. In the hydrodynamics community, sensors that float freely in a fluid environment are called “Lagrangian drifters.” The integration of such drifter data into hydrodynamic models can be achieved with the proper use of inverse modeling algorithms, in particular Kalman filtering and its extensions for nonlinear systems. The Saint-Venant equations, or shallow water equations, are a PDE model of fluid flow well suited to rivers, estuaries, and other domains where drifters may be deployed. In the present work, we use Lagrangian drifter data to perform data assimilation in shallow water models; approaching these equations with different assumptions leads to varying degrees of computational complexity. Linearizations about uniform or non-uniform flows will be presented as a basis for data assimilation relying on variational techniques (quadratic programming based) and sequential methods (ensemble Kalman filtering).

Numerical simulation results will be presented, as well as preliminary results from a study conducted at the Hydraulic Research Unit in Stillwater, Oklahoma as part of the Department of Homeland Security’s Rapid Repair of Levee Breaches program. Results from deployments of a drifter fleet in the Sacramento San Joaquin Delta will be presented. Finally, preliminary implementations of these data assimilation techniques on LBNL’s model REALM will be discussed.
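
As background on the sequential methods mentioned in the abstract, here is a minimal Python sketch of one stochastic ensemble Kalman filter analysis step for a generic state vector. It is not tied to the Saint-Venant equations or to the REALM model; the state dimension, observation operator, and noise levels are illustrative assumptions.

    # Minimal sketch of one stochastic ensemble Kalman filter (EnKF) analysis
    # step, the kind of sequential update mentioned in the abstract. The state,
    # observation operator H, and noise levels are illustrative stand-ins, not
    # the Saint-Venant / REALM setup from the talk.
    import numpy as np

    rng = np.random.default_rng(1)
    n_state, n_obs, n_ens = 50, 5, 40     # state size, observations, ensemble size

    # Forecast ensemble: columns are ensemble members (random here, for demo only).
    X = rng.normal(size=(n_state, n_ens))

    # Linear observation operator: observe 5 evenly spaced state components.
    H = np.zeros((n_obs, n_state))
    H[np.arange(n_obs), np.linspace(0, n_state - 1, n_obs, dtype=int)] = 1.0

    R = 0.1 * np.eye(n_obs)               # observation error covariance
    y = rng.normal(size=n_obs)            # hypothetical observation vector

    def enkf_analysis(X, H, R, y, rng):
        """Stochastic EnKF update: perturb observations, apply Kalman gain."""
        n_ens = X.shape[1]
        x_mean = X.mean(axis=1, keepdims=True)
        A = X - x_mean                                  # ensemble anomalies
        P = A @ A.T / (n_ens - 1)                       # sample forecast covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
        # Perturbed observations give each member its own innovation.
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
        return X + K @ (Y - H @ X)

    X_analysis = enkf_analysis(X, H, R, y, rng)
    print(X_analysis.shape)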

----------------------------------

Link of the Week: Supercomputing and Competitiveness

A study out of the University of Arkansas shows that consistent investment in high performance computing (HPC) results in greater research competitiveness for U.S. colleges and universities. Among the variables the researchers used to determine the impact of funding on competitiveness were TOP500 rankings, institutional data from the Carnegie Foundation, information from U.S. News and World Report, the total number of published articles, total funding from the National Science Foundation, and total federal funding.

Amy Apon, director of the Arkansas High Performance Computing Center, commented:

“Overall, our models indicated that investment in high-performance computing is a good predictor of research competitiveness at U.S. academic institutions. Even at modest levels, such investments, if consistent from year to year, strongly correlate to new NSF funding for science and engineering research, which in turn leads to more published articles.”

While researchers found a clear correlation between investment in supercomputing resources and research competitiveness, they cautioned that such investment needs to be ongoing. Federal funding and published articles begin to decline after two years if initial investments are not renewed, they noted.
----------------------------------

About Computing Sciences at Berkeley Lab

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.