
InTheLoop | 06.13.2011

June 13, 2011

Voyager, Computer Models Find Magnetic Froth at Solar System's Edge

The edge of our solar system may not be smooth, as scientists had expected, but rather a churning sea of magnetic bubbles. Using a computer model based on Voyager data and run at NERSC, scientists have shown that the sun’s magnetic field becomes bubbly in the heliosheath. The same magnetic process that powers solar flares and auroras is to blame. The bubbles may help explain how some mysterious, highly energetic particles make it to Earth. Read more.


ESnet Participates in World IPv6 Day

On June 8, 2011, content providers, universities, ISPs, and other network organizations took part in World IPv6 Day. This massive exercise was akin to a “test drive”: content was shared and networks were configured to support Internet Protocol version 6 (IPv6), which will gradually supplant the current Internet Protocol version 4 (IPv4). The point of the dry run was to raise awareness of IPv6, provide feedback on what worked and what broke, and allow for a smoother overall transition to IPv6 as the pool of available IPv4 addresses runs out.
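
The scale gap driving the transition is easy to see with a quick sketch using Python’s standard ipaddress module (a minimal illustration added here, not part of the World IPv6 Day exercise): IPv4’s 32-bit space tops out at roughly 4.3 billion addresses, while IPv6’s 128-bit space offers about 3.4 × 10^38.

    import ipaddress

    # IPv4 offers 2**32 addresses; IPv6 offers 2**128.
    v4 = ipaddress.ip_network("0.0.0.0/0")   # the entire IPv4 space
    v6 = ipaddress.ip_network("::/0")        # the entire IPv6 space

    print(f"IPv4 addresses: {v4.num_addresses:,}")    # 4,294,967,296
    print(f"IPv6 addresses: {v6.num_addresses:.2e}")  # ~3.40e+38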

As a network pioneer, ESnet has made most of its public content available over IPv6 for several years. However, ESnet still took a number of actions to further engage the community on World IPv6 Day (read more). ESnet network engineer Michael Sinatra answers questions about IPv6 on the DOE Energy Blog.


Kathy Yelick Announces Transitional Organization Structure

Following Juan Meza’s appointment as Dean of Natural Sciences at UC Merced, Associate Lab Director Kathy Yelick has announced a transition plan. Meza will continue as Acting Division Director for CRD throughout the summer. His other role, as head of the High Performance Computing Research Department, will be redistributed during the transition, with new points of contact (POCs) for particular areas:

  • John Bell will be acting Math POC.
  • Kathy Yelick will serve as Computing Sciences POC (the rest of the ASCR research portfolio).
  • Deb Agarwal will be non-ASCR POC for projects funded by BES, BER, HEP, NP, EERE, and others.
  • The ESnet Department will report directly to Yelick during this transition, and Steve Cotter continues to be ESnet POC.

The POCs will ensure that we have a consistent view of decisions being made at DOE about program announcements, calls for proposals, funding, and other issues.


ESnet’s OSCARS Honored in UC Larry L. Sautter Award Program

The University of California Information Technology Leadership Council has announced that ESnet’s On-Demand Secure Circuits and Advance Reservation System (OSCARS) was selected for honorable mention in the 2011 Larry L. Sautter Award Program. The Sautter Award was established in 2000 to encourage and recognize innovative deployment of information technology in support of the University’s mission. It honors projects developed by faculty and staff in any department at the ten UC campuses, the UC Office of the President (UCOP), and Lawrence Berkeley National Laboratory. The awards will be presented at the UC Computing Services Conference at UC Merced on August 8, 2011.


Jamie Sethian Is Named Einstein Visiting Fellow in Berlin

Jamie Sethian, Professor of Mathematics at UC Berkeley and head of Berkeley Lab’s Mathematics Group, is one of two mathematicians recently named Einstein Visiting Fellows to the Berlin Mathematical School. An Einstein Visiting Fellow is not a conventional visiting scientist who attends a university or research institution in Berlin for just one semester. The fellows, funded by the Einstein Foundation Berlin, are asked to become long-term partners with the science and research community in Berlin.


Berkeley Grad Student Earns Distinguished Paper Award at Euro-Par 2011

Edgar Solomonik, a Computer Science Ph.D. student at UC Berkeley, and Prof. Jim Demmel, who holds a joint appointment in CRD’s Scientific Computing Group, are co-authors of a “Distinguished Paper” to be presented at the Euro-Par 2011 conference, Aug. 29–Sept. 1 in Bordeaux, France. Solomonik is also a fellow in DOE’s Computational Science Graduate Fellowship Program. The paper is entitled “Communication-optimal parallel 2.5D matrix multiplication and LU factorization algorithms.”

According to Demmel, the most expensive operation any computer performs, measured in time or energy, is “communication,” i.e., moving data, either between levels of a memory hierarchy (e.g., cache, main memory, and disk) or between processors over a network. This paper is part of ongoing research to develop so-called “communication-avoiding” algorithms that provably communicate as little data as possible. Recently proven lower bounds on communication for linear algebra are much lower than what current algorithms actually communicate, suggesting that new, much faster and more energy-efficient algorithms might be found that attain these bounds. In this paper, for the first time, new algorithms for the most basic dense linear algebra operations (matrix multiplication and solving linear equations) are derived that do attain these lower bounds, moving a large factor less data than the standard algorithms. Significant speedups were demonstrated on an IBM BG/P.
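
As a rough illustration of why this matters (a back-of-the-envelope sketch using standard asymptotic estimates, not the paper’s algorithms or measured results), the per-processor communication volume of a conventional 2D matrix-multiply layout scales like n²/√P, while a 2.5D layout that keeps c replicated copies of the matrices scales like n²/√(cP), roughly a factor of √c less data moved. The problem and machine sizes below are hypothetical.

    import math

    def words_moved_2d(n, p):
        """Per-processor communication volume (words) for a standard 2D
        matrix-multiply layout: roughly n**2 / sqrt(p)."""
        return n**2 / math.sqrt(p)

    def words_moved_25d(n, p, c):
        """Per-processor communication volume for a 2.5D layout that keeps
        c replicated copies of the matrices: roughly n**2 / sqrt(c * p)."""
        return n**2 / math.sqrt(c * p)

    n, p = 32768, 4096                 # hypothetical matrix and machine sizes
    for c in (1, 4, 16):               # c = 1 reduces to the usual 2D algorithm
        ratio = words_moved_2d(n, p) / words_moved_25d(n, p, c)
        print(f"c = {c:2d}: about {ratio:.1f}x less data moved per processor")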


CS Is Hosting Householder Symposium This Week in Tahoe City

Computing Sciences is hosting the 18th Householder Symposium at the Granlibakken Conference Center & Lodge in Tahoe City, CA this week (June 12–17, 2011). The symposium, which is considered one of the most important meetings on numerical linear algebra, is organized this year by Esmond Ng of CRD’s Scientific Computing Group (SCG).

Jim Demmel, a professor at UC Berkeley and a faculty scientist in SCG, is one of the plenary speakers, with the topic “Avoiding Communication in Numerical Linear Algebra.” Other invited participants from SCG are:

  • Zhaojun Bai, “Progress in Linear and Nonlinear Eigensolvers” (poster)
  • Ming Gu, “Reduced Rank Regression via Convex Optimization” (poster)
  • Sherry Li, “Towards an Optimal Parallel Approximate Sparse Factorization Algorithm Using Hierarchically Semi-Separable Structures” (poster)
  • Esmond Ng, “A Combinatorial Problem in Sparse Orthogonal Factorization”
  • Ichitaro Yamazaki, “A Parallel Hybrid Linear Solver for Large-Scale Highly Indefinite Linear Systems of Equations” (poster)
  • Chao Yang, “Solving Nonlinear Eigenvalue Problems in Electronic Structure Calculation” (poster)

Lab Staff Add HPC Expertise to 2011 International Supercomputing Conference

Each June, the global HPC community convenes in Germany for the International Supercomputing Conference, now the world’s second largest HPC meeting. And each year, Berkeley Lab staff add their expertise to the international lineup of speakers. ISC’11 will be held June 19–23 at the Congress Center Hamburg. Here is a list of LBNL participants:

Sunday, June 19

  • Hank Childs of CRD’s Visualization Group will lead a half-day tutorial on “Parallel Visualization for Very Large Data Simulations.” Co-led by Jean Favre of the Swiss National Supercomputing Center, the tutorial will focus on VisIt, an open source visualization and analysis tool designed for processing large data. Childs is also the architect of VisIt.

Monday, June 20

  • Erich Strohmaier, head of CRD’s Future Technologies Group, will present the 37th edition of the twice-yearly TOP500 List of the world’s most powerful supercomputers.
  • Berkeley Lab Deputy Director Horst Simon will add the U.S. perspective to an international panel on “Trans-Petaflop/s Initiatives,” joining experts from China, Japan, Germany, and Russia.
  • John Shalf, head of NERSC’s Advanced Technologies Group, will chair a panel discussion on “Heterogeneous Systems and their Challenges to HPC Systems.”

Tuesday, June 21

  • Simon will chair a debate on “GPUs—The Fast Lane on the Road to Better Science?”
  • Strohmaier, one of the original authors of the TOP500 list, and Simon, also an editor of the list, will participate in an in-depth Q&A discussion of the latest list and what it means to the HPC community.
  • Strohmaier will also co-chair a Birds-of-a-Feather session on “Establishing Compute Energy Efficiency Metrics.”

Thursday, June 23

  • Strohmaier will host a session featuring talks by “Young and Bright HPC Researchers.”

ICCS Members Collaborate on Award-Winning Paper

One year after Lawrence Berkeley National Laboratory and UC Berkeley established the International Center for Computational Science (ICCS) with partners at the University of Heidelberg in Germany and the National Astronomical Observatories of the Chinese Academy of Sciences in China, the first research paper submitted by ICCS-affiliated researchers will be honored with the PRACE Award. The award, sponsored by the Partnership for Advanced Computing in Europe (PRACE), will be presented to the authors at the 2011 International Supercomputing Conference.

The paper, entitled “Astrophysical Particle Simulations with Large Custom GPU Clusters on Three Continents,” will be presented on Monday, June 20, at ISC’11 in Hamburg, Germany. Read more.


ESnet’s Kevin Oberman Engineers Engaging, Successful Career

Kevin Oberman’s career path has been marked by surprising events trumping his expectations and plans, leading to a 37-year career in networking and engineering first at Lawrence Livermore and then at Berkeley Lab as a senior network engineer with ESnet. He will retire on July 1. Read more.


Funding Available to Attend Grace Hopper Conference

The Grace Hopper Celebration of Women in Computing conference will be held in Portland, Oregon, November 9–12, 2011.

This conference is designed to bring the research and career interests of women in computing to the forefront. Presenters are leaders in their respective fields, representing industrial, academic, and government communities. Leading researchers present their current work, while special sessions focus on the role of women in today’s technology fields, including computer science, information technology, research, and engineering.

The Berkeley Lab Computing Sciences Diversity Working Group is able to pay the travel and registration costs for a few Lab staff to attend. If you would like to attend the Grace Hopper Conference and have your travel costs sponsored by the Diversity Working Group, please contact Deb Agarwal with the following information:

  • Name:
  • Supervisor:
  • Paragraph giving the reason you would like to attend Grace Hopper:

The deadline for submitting applications is June 20, 2011.


July 1 Is Application Deadline for DOE ACTS Collection Workshop

The 12th Workshop on the DOE Advanced Computational Software (ACTS) Collection, “Scalable and Robust Computational Tools for High-End Computing,” will be held at Sutardja Dai Hall on the UC Berkeley campus on August 16–19, 2011. The application submission deadline is July 1, 2011, and all students and postdocs requesting funding should submit a recommendation letter by July 1.

The workshop, organized by Berkeley Lab, will present an introduction to the DOE ACTS Collection for application scientists whose research requires large amounts of computation, robust numerical algorithms, or both. The workshop will include a range of tutorials on the tools currently available in the collection, discussion sessions aimed at addressing participants’ specific computational needs, and hands-on practice using state-of-the-art supercomputers at NERSC. Presenters are tool developers from DOE National Laboratories.

Applications are open to computational scientists from industry and academia. Workshop fees are fully sponsored by DOE’s Office of Science. In addition, DOE will sponsor travel and lodging expenses for a limited number of graduate students and postdoctoral fellows. For more information on the workshop, please contact Tony Drummond at (510) 486-7624.


This Week’s Computing Sciences Seminars

Crossing the Memory Wall: The Application-Specific Approach
Monday, June 13, 10:30 am–12:00 pm, 50B-4205
Xian-He Sun, Chair, Department of Computer Science, and Director, Scalable Computing Software Laboratory, Illinois Institute of Technology

Data access is a well-known bottleneck of high performance computing (HPC). The prime sources of this bottleneck are the performance gap between the processor and disk storage and the large memory requirements of increasingly data-hungry applications. Although advanced memory hierarchies and parallel file systems have been developed in recent years, they only provide high bandwidth for contiguous, well-formed data streams and perform poorly when serving small, noncontiguous data requests. There is no general solution for the memory-wall problem. The data-access wall remains after years of study and is arguably becoming the most notorious bottleneck in HPC.

We propose a new dynamic application-specific I/O architecture for HPC. Unlike traditional I/O designs, where data is stored and retrieved on request, our architecture is based on a novel “server-push” model in which a data access server proactively pushes data from a file server into memory and makes smart decisions on data layout based on the access patterns of the underlying application. Here “dynamic” means that the data layout and prefetching mechanisms can change dynamically and automatically between applications and even within a single application. In this talk, we present the design considerations of the dynamic application-specific approach and implementation results under MPICH2 and PVFS. We also discuss possible system and hardware support to extend the application-specific architecture to system integration and to cache and memory data access, respectively.
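
To make the “server-push” idea concrete, here is a toy sketch (a hypothetical illustration, not the speaker’s implementation): a prefetcher watches the application’s recent block requests and, once it detects a sequential pattern, pushes the next few blocks from storage into an in-memory cache before they are requested.

    from collections import deque

    class PushPrefetcher:
        """Toy model of a server-push I/O layer: watch recent block requests
        and, on detecting a sequential pattern, push the next blocks from
        storage into a memory cache before the application asks for them."""

        def __init__(self, storage, depth=4):
            self.storage = storage            # dict-like: block id -> data
            self.depth = depth                # how far ahead to push
            self.cache = {}                   # blocks pushed but not yet read
            self.history = deque(maxlen=3)    # most recent block ids requested

        def read(self, block_id):
            data = self.cache.pop(block_id, None)
            if data is None:                  # cache miss: fetch on demand
                data = self.storage[block_id]
            self.history.append(block_id)
            self._maybe_push()
            return data

        def _maybe_push(self):
            # A deliberately simple pattern detector: three stride-1 accesses.
            h = list(self.history)
            if len(h) == 3 and h[1] == h[0] + 1 and h[2] == h[1] + 1:
                for nxt in range(h[2] + 1, h[2] + 1 + self.depth):
                    if nxt in self.storage and nxt not in self.cache:
                        self.cache[nxt] = self.storage[nxt]   # push ahead of demand

    # Hypothetical usage: read blocks 0..7 sequentially; after the first three
    # on-demand reads, later blocks are already waiting in the cache.
    storage = {i: f"block-{i}" for i in range(10)}
    pf = PushPrefetcher(storage)
    for i in range(8):
        pf.read(i)
    print(sorted(pf.cache))   # blocks pushed but not yet consumed: [8, 9]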

OSF Brown Bag: No-Cost ZFS On Low-Cost Hardware
Tuesday, June 14, 12:00–1:00 pm, OSF Room 238
Trever Nightingale, NERSC

Today’s data generation rates and terabyte hard drives have led to a new breed of commodity servers with very large filesystems. This talk looks at the implications and describes the decision to deploy ZFS under FreeBSD on NERSC servers. Included will be a look at pertinent ZFS features, the configuration decisions adopted, our experiences with ZFS so far, and how we are using ZFS features to gain functionality that was not possible with previous filesystems.


Link of the Week: Chinese Supercomputer Breaks World Record in Application Performance

“In case you were wondering if these new-fangled Chinese GPU-powered supercomputers can do anything useful, Thursday's announcement about the latest exploits of the Tianhe-1A system should give you some idea of the significance of these petascale beasts,” writes Michael Feldman in HPCwire:

On Thursday, researchers from the Chinese Academy of Sciences' Institute of Process Engineering (CAS-IPE) claimed to have run a molecular simulation code at 1.87 petaflops — the highest floating point performance ever achieved by a real-world application code. The simulation is being used to help discern the behavior of crystalline silicon, a material used in solar panels and semiconductors….

Over and above the impressive FLOPS is the larger significance of using the technology to propel science and engineering forward. Last year, NVIDIA Tesla GM Andy Keane penned an opinion piece warning that the lagging adoption of GPUs in HPC could threaten the country's competitive edge. While that editorial could easily be construed as self-serving for his employer's interests, the fact is that the US and Europe have lagged countries like China and Japan in adopting this technology for their most elite systems. Those nations saw the revamped graphics chip as the most economical path to petascale machines.

Of course, there are valid reasons to be wary of GPU computing for HPC — programmability difficulties, over-hyping of performance, proprietary software, etc. — leading many in the HPC community to be extra careful about adopting the technology. But the negative backwash from the original flood of hype can be as ill-informed as the initial exaggerations. In the current issue of HPCwire, Stone Ridge Technology CEO and GPU enthusiast Vincent Natoli offers a nice set of rebuttals to the major objections to GPU computing. If you're a GPGPU fence-sitter, it's definitely worth a read….

It's also best to see this achievement in the larger context of what the Chinese scientific community is doing. A recent article in Forbes points out that China is quickly catching up to the US in scientific output, and in some cases surpassing it:

In 2009, for the first time, Chinese researchers published more papers in information technology than those in the U.S., with both countries churning out more than 100,000 info-tech publications. In clean and alternative energy, Chinese researchers have likewise been publishing up a storm, not surpassing U.S. researchers but coming close.

The bottom line is that the US is in danger of losing its technological edge, which it has basically enjoyed, unchallenged, since the end of World War II. It's not that GPU computing is the magic bullet here. But news like this should be a wake-up call to American HPC'ers and policy-makers that sometimes being extra careful is the riskiest proposition of them all.



About Computing Sciences at Berkeley Lab

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab has seen its scientific expertise recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.