
InTheLoop | 12.10.2012


John Shalf Is Named Chief Technology Officer for NERSC

John Shalf has been named the Chief Technology Officer (CTO) of the National Energy Research Scientific Computing (NERSC) Division at Lawrence Berkeley National Laboratory (Berkeley Lab) by NERSC Director Sudip Dosanjh. Shalf will also continue to serve in his current role as head of the Computer and Data Sciences Department in Berkeley Lab’s Computational Research Division (CRD). As Chief Technology Officer, Shalf will help NERSC develop a plan to achieve exascale performance (1 quintillion, or 10^18, operations per second), a thousand-fold increase over current petascale systems. Read more.


CRD Researchers Apply Novel Tools and a Pharmaceutical Screening Strategy to Capture Carbon Dioxide

Today, crystalline porous materials, like zeolites and metal organic frameworks (MOFs), are widely used in industry to purify water and separate gases, among other things. But scientists believe that these structures have the potential to do a lot more—like capturing carbon dioxide (CO2) from the flue-gas emissions of coal-burning power plants before it reaches the atmosphere and contributes to global warming. Until recently, one of the major challenges to using these materials for CO2 capture was identifying the right porous structures to effectively do the job. But novel tools developed by Computational Research Division researchers, combined with an informatics screening strategy inspired by the pharmaceutical industry, are making this search a lot easier. Read more.
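The screening idea can be illustrated with a minimal sketch: as in pharmaceutical high-throughput screening, a library of candidate structures is scored by a figure of merit and only the best candidates are kept for closer study. The material names, uptake numbers, and scoring function below are invented for illustration and are not the CRD tools or data.

```python
# Hypothetical pharma-style screening loop: rank a library of candidate
# porous structures by a simple figure of merit and keep the top hits.

def screen_materials(library, score_fn, top_n=3):
    """Rank candidate structures by score_fn (higher is better)."""
    ranked = sorted(library, key=score_fn, reverse=True)
    return ranked[:top_n]

# Toy library: (name, CO2 uptake, N2 uptake) -- numbers are made up.
library = [
    ("zeolite-A", 2.1, 0.8),
    ("MOF-74", 5.6, 0.9),
    ("MOF-5", 1.4, 0.7),
    ("zeolite-13X", 3.9, 1.1),
]

def selectivity(material):
    """CO2/N2 selectivity, a common figure of merit for flue-gas capture."""
    _, co2_uptake, n2_uptake = material
    return co2_uptake / n2_uptake

best = screen_materials(library, selectivity)
print([name for name, *_ in best])  # prints ['MOF-74', 'zeolite-13X', 'zeolite-A']
```

In a real screen the scoring function would come from molecular simulation of each structure, but the ranking-and-pruning loop is the same.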


ESnet to Discuss Innovative Projects at LHC Meetings This Week at CERN

At a series of meetings this week focusing on computing and networking in support of research on the Large Hadron Collider (LHC) at CERN, ESnet staff members Chin Guok and Inder Monga will discuss how ESnet is developing and deploying network services to support large-scale science.

At the two-day ATLAS Distributed Computing Tier-1/Tier-2/Tier-3 Jamboree, Chin Guok will discuss “Intelligent Network Services.” In his invited talk on Tuesday, Dec. 11, Guok will provide an overview of scientific workflows, their requirements, and existing (and conceptual) technologies that can meet these requirements.

On Wednesday, Dec. 12, Inder Monga will discuss “Software Defined Networking for Big-Data Science” at the Worldwide LHC Computing Grid (WLCG) Grid Deployment Board meeting.

On Thursday, Dec. 13, Monga will attend the LHCONE Point-to-Point Service Workshop along with Guok, ESnet Director Greg Bell and ESnet Senior Advisor Bill Johnston. Monga will give a talk on “Introduction to Bandwidth on Demand Concepts.”

In conjunction with his visit to CERN, Johnston will also give a talk on “Enabling high throughput in widely distributed data management and analysis systems: Lessons from the LHC” to members of the Deutsches Forschungsnetz (DFN), the German Research Network, on Wednesday, Dec. 12.

ESnet provides U.S. scientists participating in the LHC with critical network links to the two Tier 1 data storage sites in the U.S. — Fermilab in Illinois and Brookhaven National Laboratory in New York. All data stored at the two sites are accessed at least in part via ESnet.


AGU Highlight: How Climate Change Affects Risk of Extreme Weather Events

In a poster session at the annual meeting of the American Geophysical Union in San Francisco last week, CRD scientists Daithi Stone and Michael Wehner presented their Weather Risk Attribution Forecast project. The real-time system seeks to understand how much greenhouse gas emissions influence the risk of extreme weather events. The project was just awarded a DOE INCITE grant of 150 million supercomputing hours, which the researchers intend to use to run their model at finer resolutions, as small as 25 km, enabling them to analyze the risks of shorter-term storms and other severe weather. They plan to make their data public and are looking for climate scientists who may be interested in using it.


Correction: Another Invited Talk at Last Week’s AGU Meeting

In last week’s InTheLoop item on Computing Sciences researchers’ contributions to the American Geophysical Union’s fall 2012 meeting, held December 3–7 in San Francisco, we inadvertently omitted an invited talk, “13 TB, 80,000 cores and TECA: The search for extreme events in climate datasets” by Prabhat, Michael Wehner, Suren Byna, Oliver Ruebel, Fuyu Li (formerly of the Earth Sciences Division), Wes Bethel, and William Collins (Earth Sciences). Here is an abstract of their presentation:

Modern climate datasets expose parallelism across a number of dimensions: spatial locations, timesteps and ensemble members. We have designed TECA (Toolkit for Extreme Climate Analysis) to exploit these modes of parallelism, thereby enabling climate analysis tasks to be completed orders of magnitude faster than a serial mode of execution. We have applied TECA for detecting tropical cyclones, extra-tropical cyclones, atmospheric rivers and blocking events in climate output. As a leading example, we demonstrate the successful application of TECA for finding tropical cyclones in a massive 13 TB CAM5 0.25-degree dataset; we complete the analysis task on 80,000 cores in ~1 hour. We will present scientific results from the application of TECA to multi-resolution CAM5 and CMIP-3/CMIP-5 ensemble output, characterizing the change in extreme events across models and scenarios.


Computing Sciences Holiday Party December 11

The Computing Sciences Winter Holiday Party will take place at the Toll Room, Alumni House, UC Berkeley campus, on Tuesday, December 11, from 4:00 to 8:00 pm. Refreshments will be served. Spouses and partners are welcome, but no children under 21.

For directions and parking, go here.


This Week’s Computing Sciences Seminars

Designing Next Generation Massively Multithreaded Architectures for Irregular Applications
Monday, Dec. 10, 2:00–3:30 pm, 380 Soda Hall, UC Berkeley
Antonino Tumeo, Pacific Northwest National Laboratory

Irregular applications, such as data mining or graph-based computations, show unpredictable memory/network access patterns and control structures. Massively multi-threaded architectures with large node count, like the Cray XMT, have been shown to address their requirements better than commodity clusters. In this talk we present the approaches that we are currently pursuing to design future generations of these architectures. First, we introduce the Cray XMT and compare it to other multithreaded architectures. We then propose an evolution of the architecture, integrating multiple cores per node and next generation network interconnect. We advocate the use of hardware support for remote memory reference aggregation to optimize network utilization. For this evaluation we developed a highly parallel, custom simulation infrastructure for multi-threaded systems. Our simulator executes unmodified XMT binaries with very large datasets, capturing effects due to contention and hot-spotting, while predicting execution times with greater than 90% accuracy. We also discuss the FPGA prototyping approach that we are employing to study efficient support for irregular applications in next generation manycore processors.
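The aggregation idea mentioned in the abstract can be illustrated with a toy sketch: many small remote memory requests bound for the same node are coalesced into a single network message, cutting message count for fine-grained irregular access patterns. The address-to-node mapping and data below are invented for illustration and do not reflect the PNNL hardware design.

```python
# Illustrative sketch of remote memory reference aggregation: word-sized
# remote reads destined for the same node are batched into one message.
from collections import defaultdict

def aggregate(requests, num_nodes):
    """Group remote read requests by destination node."""
    by_node = defaultdict(list)
    for addr in requests:
        by_node[addr % num_nodes].append(addr)  # toy address-to-node mapping
    return by_node

# 12 scattered references to 4 nodes collapse into at most 4 messages.
requests = [3, 7, 11, 2, 6, 9, 1, 5, 13, 0, 4, 8]
messages = aggregate(requests, num_nodes=4)
print(len(messages), sum(len(v) for v in messages.values()))  # prints: 4 12
```

The benefit is the ratio of requests to messages: irregular applications issue many small references, so batching them amortizes per-message network overhead.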



About Computing Sciences at Berkeley Lab

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel Prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.