
InTheLoop | 05.13.2013


Heady Mathematics: Describing Popping Bubbles in a Foam

Bubble baths and soapy dishwater, the refreshing head on a beer and the luscious froth on a cappuccino. All are foams, beautiful yet ephemeral as the bubbles pop one by one. Now, James A. Sethian and Robert I. Saye from Berkeley Lab’s Computational Research Division (CRD) and the University of California, Berkeley, have described mathematically the successive stages in the complex evolution and disappearance of foamy bubbles, a feat that could help in modeling industrial processes in which liquids mix or in the formation of solid foams such as those used to cushion bicycle helmets. Read more.


Aydin Buluç Wins 2013 DOE Early Career Award

Aydin Buluç of CRD has been honored with a 2013 Department of Energy Early Career Award for his work on energy-efficient parallel graph and data mining algorithms. With this award, Buluç will develop a new family of algorithms to drastically reduce the energy footprint and running time of graph and sparse matrix computations that form the basis of various data mining techniques. Read more.
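
As a rough illustration of the graph-as-linear-algebra view that this line of work builds on, here is a minimal sketch of breadth-first search expressed as repeated sparse matrix-vector products. The example graph and variable names are our own toy illustration, not code from the award-winning project:

# Toy sketch: BFS phrased as repeated sparse matrix-vector products,
# the kind of formulation underlying parallel graph algorithms built
# on sparse linear algebra. (Illustrative only.)
import numpy as np
from scipy.sparse import csr_matrix

# Adjacency matrix of a small directed graph: edge i -> j is stored as
# A[j, i] = 1 so that A @ frontier advances the frontier by one hop.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
n = 5
rows = [j for i, j in edges]
cols = [i for i, j in edges]
A = csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))

level = np.full(n, -1)   # BFS level of each vertex (-1 = not yet visited)
frontier = np.zeros(n)
frontier[0] = 1.0        # start the search at vertex 0
level[0] = 0

depth = 0
while frontier.any():
    depth += 1
    reached = A @ frontier               # one SpMV = one BFS expansion
    new = (reached > 0) & (level == -1)  # keep only newly discovered vertices
    level[new] = depth
    frontier = new.astype(float)

print(level)   # [0 1 1 2 3]

Phrasing graph traversal this way lets the heavy lifting happen inside well-optimized sparse matrix kernels, which is what makes the energy and runtime of such computations tractable to analyze and reduce.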


CS Staff Win Best Paper Award at Cray User Group Meeting

Four CRD and NERSC staff members and a collaborator from Cray Inc. won the Best Paper award at the Cray User Group meeting, CUG2013, held May 6–9 in the Napa Valley. Surendra Byna, Andrew Uselton, Prabhat, David Knaak (Cray Inc.), and Yun (Helen) He won the award for “Trillion Particles, 120,000 Cores, and 350 TBs: Lessons Learned from a Hero I/O Run on Hopper.”

Here is the abstract:

Modern peta-scale applications can present a variety of configuration, runtime, and data management challenges when run at scale. In this paper, we describe our experiences in running VPIC, a large-scale plasma physics simulation, on the NERSC production Cray XE6 system Hopper. The simulation ran on 120,000 cores using ∼80% of computing resources, 90% of the available memory on each node and 50% of the Lustre scratch file system. Over two trillion particles were simulated for 23,000 time steps, and 10 one-trillion particle dumps, each ranging between 30 and 42 TB, were written to HDF5 files at a sustained rate of ∼27 GB/s. To the best of our knowledge, this job represents the largest I/O undertaken by a NERSC application and the largest collective writes to single HDF5 files. We outline several obstacles that we overcame in the process of completing this run, and list lessons learned that are of potential interest to HPC practitioners.
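
The I/O pattern at the heart of the run, many MPI ranks writing collectively into a single shared HDF5 file, can be sketched at toy scale with h5py and mpi4py. The dataset name and sizes below are our own illustrative choices, not details of the VPIC configuration, whose production I/O path is C-based:

# Toy sketch of collective writes to one shared HDF5 file, the general
# pattern described in the paper. Requires h5py built with MPI support;
# run with e.g. `mpiexec -n 4 python write.py`.
import numpy as np
import h5py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nranks = comm.Get_rank(), comm.Get_size()

particles_per_rank = 1_000_000          # each rank owns a slab of particles
local = np.random.rand(particles_per_rank).astype(np.float32)

# Every rank opens the same file; the mpio driver coordinates access.
with h5py.File("particles.h5", "w", driver="mpio", comm=comm) as f:
    dset = f.create_dataset("x", (nranks * particles_per_rank,), dtype="f4")
    start = rank * particles_per_rank
    # Collective mode lets the MPI-IO layer aggregate the ranks' writes
    # into large, well-aligned requests to the parallel file system.
    with dset.collective:
        dset[start : start + particles_per_rank] = local

At the scale of the Hopper run, sustaining such writes also depends on file system tuning (Lustre striping, aggregation settings), which is where much of the paper's hard-won experience lies.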

Correction: In last week’s InTheLoop item on the Cray User Group meeting, Andrew Uselton’s name was inadvertently omitted from a list of staff contributions. Andrew was lead author of “A File System Utilization Metric for I/O Characterization,” co-authored by Nicholas Wright. We apologize for the error.


National Day of Civic Hacking Will Be June 1–2

The National Day of Civic Hacking will take place June 1–2, 2013, in cities across the country. The event will bring together citizens, software developers, and entrepreneurs to collaboratively create, build, and invent new solutions that use publicly released data, code, and technology to solve challenges relevant to our neighborhoods, our cities, our states, and our country. The National Day of Civic Hacking will give citizens an opportunity to do what is quintessentially American: roll up our sleeves, get involved, and work together to improve our society.

Bay Area events include:


This Week’s Computing Sciences Seminars

Parallel Query Evaluation by Control Flow Migration
Monday, May 13, 10:00–11:00 am, 50B-4205
Eric Hoffman, ET International

Relational programming is widely used in large-scale data analytics and web services. The restricted semantics of the relational model and the aggregate performance requirements of these applications make them well suited to the exploration of parallel evaluation strategies.

This talk will describe the mapping of relational queries onto an abstract machine, and the implementation of that machine on a commodity distributed-memory system for a commercial database offering. In doing so, it will touch on a variety of topics central to the design of scalable distributed services, such as consistency in the presence of faults.
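
The abstract machine itself is not described in this blurb. As generic background only, here is a minimal sketch of the classic iterator (operator pipeline) model of query evaluation, the conventional baseline that parallel evaluation strategies depart from; the tables and operators are our own toy example, not the talk's design:

# Toy sketch of relational query evaluation as a pipeline of operators
# (the classic iterator model). Generic background, not the talk's machine.
employees = [
    {"name": "ada", "dept": "cs", "salary": 120},
    {"name": "max", "dept": "ee", "salary": 95},
    {"name": "eva", "dept": "cs", "salary": 130},
]

def scan(table):                      # leaf operator: produce tuples
    yield from table

def select(pred, child):              # relational selection (WHERE)
    return (t for t in child if pred(t))

def project(cols, child):             # relational projection (SELECT list)
    return ({c: t[c] for c in cols} for t in child)

# SELECT name FROM employees WHERE dept = 'cs':
plan = project(["name"], select(lambda t: t["dept"] == "cs", scan(employees)))
print(list(plan))   # [{'name': 'ada'}, {'name': 'eva'}]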

Efficient and Easily Programmable Accelerator Architectures
Monday, May 13, 2:00–3:30 pm, 430 Soda Hall (Wozniak Lounge)
Tor Aamodt, University of British Columbia

Current projections suggest semiconductor scaling may end near the 7nm process node within 10 years. Energy efficiency is already a primary design goal due to the end of voltage scaling. Programmable accelerators such as graphics processing units (GPUs) can potentially enable further reductions in the cost of computation along with further increases in computing efficiency. However, GPUs are typically perceived as suitable only for a narrow range of applications such as high-performance computing. This talk will describe recent research on hardware changes to broaden the range of applications that benefit from GPU-like accelerators. Approaches discussed will include introducing transactional memory and coherence into GPUs, as well as improving cache utilization via hardware thread scheduling.

HPC Brown Bag: Julian Borrill
Wednesday, May 15, 12:00–1:30 pm, OSF 943-238
Julian Borrill, LBNL/CRD

Prototyping Abstractions for Stencil-Based Computations on a Structured Mesh
Thursday, May 16, 10:00–11:00 am, 50B-4205
John Bachan, FLASH Center for Computational Science, University of Chicago

In the first part of this talk, I will present a set of software abstractions that facilitate, in a natural way, the development and optimization of stencil-based computations on a structured mesh. To demonstrate feasibility, the mesh abstractions were prototyped as a standalone C++ library and applied to two example codes: a finite-volume (FV) shock-capturing code and a Lagrangian hydrodynamics code. The implementation was verified against standard test problems, which will be briefly described. Moreover, I will identify potential optimizations that can improve the resulting performance.
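
The library itself is not shown here, but the class of computation it targets is easy to illustrate. Below is a generic stencil update on a structured mesh, a toy explicit heat-equation step of our own, not the talk's API:

# Toy sketch of a stencil-based computation on a structured mesh:
# an explicit 5-point heat-equation update with one layer of ghost cells.
# Our own illustration, not the abstraction library from the talk.
import numpy as np

def heat_step(u, alpha=0.1):
    """One explicit time step of u_t = alpha * (u_xx + u_yy) on a unit grid."""
    new = u.copy()
    # Interior update: each cell reads its four face neighbors (the stencil).
    new[1:-1, 1:-1] = u[1:-1, 1:-1] + alpha * (
        u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
        - 4.0 * u[1:-1, 1:-1]
    )
    return new

# Hot spot in the middle of a cold mesh; boundary cells stay fixed.
u = np.zeros((64, 64))
u[32, 32] = 1.0
for _ in range(100):
    u = heat_step(u)
print(u[30:35, 30:35].round(4))

Mesh abstractions of the kind the talk describes aim to let users write the update rule (the stencil) while the library handles loop structure, ghost-cell exchange, and low-level optimization.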

For the second half, I will present a space-efficient and performant data structure for storing the global distribution of AMR octree metadata locally on each processor. This method was implemented in the FLASH code to support an efficient implementation of fluid-structure interactions. The benefits of the method are demonstrated through strong scaling of a critical routine in the application.
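
The blurb does not spell out the data structure itself. As a generic illustration of one common space-efficient approach (a sorted array of Morton space-filling-curve keys, not necessarily the talk's method), consider:

# Generic illustration (not necessarily the talk's method): keeping the
# global distribution of octree blocks as a flat, sorted array of Morton
# keys, so each processor holds the whole layout compactly and answers
# "who owns this block?" with a binary search instead of a tree walk.
import numpy as np

def morton3d(ix, iy, iz, bits=10):
    """Interleave the bits of integer block coordinates into one Morton key."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (3 * b)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b + 2)
    return key  # real AMR codes also fold the refinement level into the key

# Integer coordinates of some leaf blocks, then their sorted Morton keys.
coords = [(0, 0, 0), (1, 0, 0), (2, 2, 0), (3, 2, 0), (0, 3, 1), (1, 3, 1)]
keys = np.sort([morton3d(*c) for c in coords])

# Partition the sorted keys into contiguous ranges, one range per processor;
# the range boundaries ("splitters") are all any processor needs to store.
splitters = keys[2::2]   # toy partition: two blocks per processor
def owner(key):
    return int(np.searchsorted(splitters, key, side="right"))

print([owner(k) for k in keys])   # [0, 0, 1, 1, 2, 2]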


Link of the Week: Why E. O. Wilson Is Wrong

Last week’s Link of the Week was a Wall Street Journal essay by biologist E. O. Wilson asserting that knowledge of advanced mathematics is not a prerequisite for success in every scientific discipline. In rebuttal, CRD’s David Bailey and his collaborator Jonathan Borwein write in their Huffington Post blog:

That may possibly have been true 20 or 40 years ago, but it is certainly not true today. Literacy, even expertise in algebra, calculus, statistics and “discrete mathematics” (e.g., matrices) is already and will be essential….

Indeed, one need only review the trajectory of biology for the past few decades to see that biology, like many other scientific disciplines, has gone from being math-and-computer-poor to math-and-computer-rich….

Scientists of the future, whether they be physicists, chemists, biologists, sociologists or medical researchers, will rely on deep understanding of computational techniques, machine learning, advanced visualization methodologies and statistical analysis protocols. Mathematics is the foundation for all of this.

Read more.