
InTheLoop | 02.11.2013

The Weekly Newsletter of Berkeley Lab Computing Sciences

February 11, 2013

A Massive Stellar Burst, Before the Supernova

An automated supernova hunt is shedding new light on the death sequence of massive stars—specifically, the kind that self-destruct in Type IIn supernova explosions. Digging through the Palomar Transient Factory data archived at NERSC, astronomers have found the first causal evidence that these massive stars shed huge amounts of material in a “penultimate outburst” before final detonation as supernovae. Read more.

These results were published in the February 7, 2013 issue of Nature. A Nature News & Views article commenting on this discovery features an illustration from a CASTRO simulation of a collision between two shells of matter ejected by a massive star in two subsequent supernova eruptions. The CASTRO radiation hydrodynamics code was developed in the Center for Computational Sciences and Engineering (CCSE) in the Computational Research Division (CRD).


NERSC Global Filesystem Played Key Role in Neutrino Mixing Angle Discovery

Discovery of the last neutrino mixing angle — one of Science magazine’s top ten breakthroughs of the year 2012 — was announced in March 2012, just a few months after the Daya Bay Neutrino Experiment’s first detectors went online in southeast China. Collaborating scientists were thrilled that their experiment was producing more data than expected, and that a positive result was available so quickly.

But that result might not have been available so quickly without the NERSC Global Filesystem (NGF) infrastructure, which allowed NERSC staff to rapidly scale up disk and node resources to accommodate the surprisingly large influx of data. Read more.


LBNL, UC Staff Invited to NERSC Users Group Sessions on Feb. 13

NUG 2013, the annual meeting of the NERSC Users Group (NUG), will be held February 12–15, 2013, at Berkeley Lab and NERSC’s Oakland Scientific Facility. All sessions will be broadcast over the web.

Berkeley Lab and UC staff are invited to attend any of the Wednesday, Feb. 13 sessions, which will be held in the Building 50 Auditorium and will focus on “Trends, Discovery, and Innovation in High Performance Computing.” Highlights of the day include:

9:00   The Future of High Performance Scientific Computing (Kathy Yelick, Berkeley Lab Associate Director for Computing Sciences)
9:45   NERSC Today and over the Next Ten Years (Sudip Dosanjh, NERSC Director)
10:30  The 2013 NERSC Achievement Awards
11:00  Discovery of the Higgs Boson and the Role of LBNL and World-Wide Computing (Ian Hinchliffe, Berkeley Lab)
11:30  Discovery of the θ13 Weak Mixing Angle at Daya Bay using NERSC & ESnet (Craig Tull, Berkeley Lab)
1:30   The Materials Project: Combining Density Functional Theory Calculations with Supercomputing Centers for New Materials Discovery (Anubhav Jain, Berkeley Lab)
2:00   Python in a Parallel Environment (David Grote, Berkeley Lab)
2:30   OpenMSI: A Mass Spectrometry Imaging Science Gateway (Ben Bowen, Berkeley Lab)
3:15   The ALS Data Pipeline and Analysis Framework at NERSC (Jack Deslippe, Berkeley Lab)
3:45   Large Sparse Matrix Problems in Ab Initio Nuclear Structure Calculations (Pieter Maris, Iowa State University)
4:15   IPython Notebook (Fernando Perez, UC Berkeley)

Abstracts for the talks listed above are available here. The full agenda for NUG 2013 and a link to registration are available here.


Nick Wright Named Advanced Technologies Group Lead

Nick Wright has been named head of NERSC’s Advanced Technologies Group (ATG), which focuses on understanding the requirements of current and emerging applications to make choices in hardware design and programming models that best serve the science needs of NERSC users. In addition to managing a team of three, Wright will work closely with NERSC management to set strategic directions for the facility. To provide input for HPC system procurements, he will collaborate with the Computational Research Division’s Future Technologies Group (FTG) to assess emerging technologies in architecture, algorithms, parallel programming paradigms, and languages. Read more.


IEEE Computer Society Honors Lenny Oliker

Lenny Oliker of CRD’s Future Technologies Group has received the IEEE Computer Society’s Golden Core Award. Established in 1996, the award recognizes individuals for longstanding service to the society. Each year the awards committee selects recipients from a pool of qualified candidates and permanently adds their names to the Golden Core Member master list.


NERSC Staff Participate in Regional Science Bowl

High school students from all corners of the San Francisco Bay Area flocked to Berkeley Lab on Saturday, February 2, 2013 to battle in the Department of Energy’s Regional Science Bowl—an academic competition that tests students’ knowledge in all areas of science. After a day of intense competition, the team from Palo Alto High School emerged as the overall winners. The Palo Alto team will travel to Washington, D.C. in April to compete in the national competition.

A number of NERSC staff participated in Saturday’s event as moderators, scientific judges, timekeepers, and scorekeepers, including Elizabeth Bautista, Shane Canon, Isaac Ovadia, David Skinner, and Jay Srinivasan. Read more.


John Shalf Speaks at HPC Advisory Council Conference

The HPC Advisory Council, together with Stanford University, held the HPC Advisory Council Stanford Conference on February 7–8, 2013, in Stanford, California. The conference focused on high-performance computing (HPC) usage models and benefits, the future of supercomputing, the latest technology developments, best practices, and advanced HPC topics. NERSC Chief Technology Officer John Shalf spoke on “Energy Efficiency and Its Impact on Requirements for Future Programming Environments.”


Former DOE CSGF Fellow Leslie Dewan Makes Top 30 List in Forbes

Leslie Dewan, a former DOE Computational Science Graduate Fellow who spent the summer of 2011 working in CCSE, has been named to Forbes magazine’s “30 Under 30: Energy” list. This list seeks to identify “the field’s brightest stars under the age of 30.” Dewan, currently a PhD student at MIT, is also the co-founder and CEO of Transatomic Power, a new company that seeks to “turn nuclear waste into a safe, clean and scalable source of electricity.” See the Forbes profile here, her Forbes interview here, and her TED talk here.


This Week’s Computing Sciences Seminars

NERSC User Day

Wednesday, February 13, 8:30 am–5:00 pm, Bldg. 50 Auditorium

See schedule and link to abstracts in article above.

BEARS 2013: Berkeley EECS Annual Research Symposium

Wednesday, February 13, 8:45 am–12:30 pm, Chevron Auditorium, International House, UC Berkeley
Registration is required.

9:00–9:30 am, Edward Lee, “The Swarm at the Edge of the Cloud”
New research that aims to enable the simple, reliable, and secure deployment of a multiplicity of advanced distributed sense-and-control applications on shared, massively distributed, heterogeneous, and mostly uncoordinated swarm platforms through an open and universal systems architecture.

9:30–10:00 am, Krste Asanovic, The ASPIRE Project
Announcing a new five-year research project that recognizes the shift from transistor-scaling-driven performance improvements to a new post-scaling world where whole-stack co-design is the key to improved efficiency. Building on the success of the soon-to-be-completed Par Lab project, ASPIRE uses deep hardware and software co-tuning to achieve the highest possible performance and energy efficiency for future mobile and rack computing systems.

10:00–10:30 am, Richard Karp, Simons Institute for the Theory of Computing
A 10-year, $60M institute that Berkeley won in May as a result of a nationwide competition. The Institute will bring together scientists from around the world to explore fundamental issues in the theory of computing and cast a “computational lens” on phenomena in biology, physics, economics, digital systems, engineering and commerce that can be described in terms of computational processes.

10:30–11:00 am, David Patterson, “Using Big Data Analytics”
Using Big Data analytics to analyze cancer tumor genomes, an effort that brings together computer scientists from Berkeley, Intel, and Microsoft to help fight the war on cancer. His talk covers new, much faster, and more accurate genetic analysis pipelines being developed at Berkeley, as well as the technical, cost, and policy issues involved in creating a Million Cancer Genome Warehouse.

11:25 am–12:25 pm, Hot Topics at EECS Research Centers: Grad Student Presentations
Go here for a list of presentations.

Speeding Up Seismic Imaging with Reduced-Order Modeling: Scientific Computing and Matrix Computations Seminar

Wednesday, February 13, 12:10–1:00 pm, 380 Soda Hall, UC Berkeley
Victor Pereyra, Stanford University

Seismic imaging is one of the most computing-intensive challenges the oil industry faces. A few years ago we began exploring the idea of applying model order reduction to wave propagation simulation, and we are now testing the technique on the solution of earth tomography problems using full-waveform seismic data. In this presentation we will briefly describe what has been done so far and the future prospects of the technique. The real challenge is large-scale 3D on a single machine and the manipulation of very large dense matrices.
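In its generic textbook form, projection-based model order reduction replaces the full semi-discretized wave equation with a much smaller projected system; the sketch below uses generic symbols (M, K, f, V) and is not necessarily the specific formulation used in this work:

```latex
% Generic projection-based model order reduction (textbook sketch).
% Full-order semi-discretized wave equation with N degrees of freedom:
M\,\ddot{u}(t) + K\,u(t) = f(t), \qquad u(t) \in \mathbb{R}^{N}
% Approximate the state in a low-dimensional basis V \in \mathbb{R}^{N \times k}, k \ll N:
u(t) \approx V\,u_r(t)
% Galerkin projection yields a reduced system with only k unknowns,
% which is far cheaper to time-step than the full model:
(V^{\top} M V)\,\ddot{u}_r(t) + (V^{\top} K V)\,u_r(t) = V^{\top} f(t)
```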

End-System Optimizations for High Speed Networks

Wednesday, February 13, 1:30–2:30 pm, 50B-2222
Vishal Ahuja, University of California, Davis

High-speed networks have become essential both for operations within a datacenter and for communication between datacenters. Even though the networks connecting the datacenters are high speed, the applications themselves run on commodity multicore machines whose CPU frequencies are no longer scaling. The networks used by datacenters allow the reservation of dedicated channels of 10 Gbps or more between two end-points. In such a scenario, the bottleneck for end-to-end data transfer has shifted from the network to the end-system. As a result, we need transport protocols that are end-system aware.
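For a sense of scale, a quick back-of-the-envelope calculation (assuming a standard 1500-byte Ethernet frame, a figure not taken from the talk) shows how little time the end-system has per packet:

```latex
% Packet rate on a fully utilized 10 Gbps channel with 1500-byte frames:
\frac{10 \times 10^{9}\ \text{b/s}}{1500\ \text{B} \times 8\ \text{b/B}} \approx 8.3 \times 10^{5}\ \text{packets/s}
% i.e. roughly 1.2 microseconds of processing budget per packet
% if a single core handles the entire stream.
```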

To this end, I will discuss the design of the Flow Bifurcation Manager (FBM), which takes into account three features of the receiving end-system: (i) the workload on individual cores, (ii) the number of cores, and (iii) the NIC’s flow classification function. FBM uses this information to determine the number of parallel flows the sender should use and the transmission rate for each flow. FBM is shown to perform better than GridFTP, particularly when the end-system experiences packet loss in the kernel ring buffer.
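As a rough illustration of the kind of decision described above, the hypothetical heuristic below picks a flow count from the receiver’s lightly loaded cores and the NIC’s receive-queue count, then splits the reserved bandwidth evenly; the function and its parameters are invented for illustration and are not the actual FBM algorithm:

```python
# Hypothetical FBM-style decision; invented for illustration, not the actual FBM algorithm.

def choose_flows(core_loads, nic_rx_queues, reserved_gbps, load_threshold=0.5):
    """Pick a parallel-flow count and per-flow rate for a dedicated channel.

    core_loads    -- per-core utilization on the receiving end-system, in [0, 1]
    nic_rx_queues -- number of receive queues the NIC can classify flows onto
    reserved_gbps -- bandwidth of the reserved end-to-end channel
    """
    # Only cores with headroom are worth steering a flow to.
    usable_cores = [i for i, load in enumerate(core_loads) if load < load_threshold]
    # One flow per usable core, but never more flows than the NIC can classify.
    num_flows = max(1, min(len(usable_cores), nic_rx_queues))
    # Split the reserved channel evenly across the chosen flows.
    per_flow_gbps = reserved_gbps / num_flows
    return num_flows, per_flow_gbps

# Example: an 8-core receiver, a 4-queue NIC, and a 10 Gbps reservation.
print(choose_flows([0.1, 0.2, 0.9, 0.1, 0.8, 0.3, 0.4, 0.9], 4, 10.0))  # (4, 2.5)
```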

We also found that cache affinity plays a very important role in determining the throughput of end-to-end data transfers. The Cache-Aware Affinity Daemon (CAAD) analyzes the die topology and the NIC’s characteristics and conveys this information to the sender, allowing the entire end-to-end path of each new flow to be managed and controlled. CAAD performs better than state-of-the-art techniques such as Receive Flow Steering (RFS) and Receive Packet Steering (RPS). I will discuss the design of CAAD and experimental results on our 10 GigE testbed.
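The underlying mechanism of flow-to-core affinity can be sketched with standard Linux facilities; the snippet below is illustrative only (the interface name, the core choice, and the fallback are assumptions) and is not the CAAD implementation:

```python
# Illustrative cache/NIC affinity pinning on Linux; not the CAAD implementation.
import os

def nic_numa_node(iface="eth0"):
    """Read the NUMA node a NIC is attached to (the interface name is an assumption)."""
    path = f"/sys/class/net/{iface}/device/numa_node"
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return 0  # fall back to node 0 if sysfs is unavailable

def pin_to_core(core_id):
    """Pin the calling process to a single core so a flow's data stays cache-warm."""
    os.sched_setaffinity(0, {core_id})

if __name__ == "__main__":
    node = nic_numa_node("eth0")
    # Mapping a NUMA node to a specific core is platform-dependent; core 0 is a placeholder.
    pin_to_core(0)
    print(f"NIC NUMA node: {node}; receiver pinned to core 0")
```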

Making Applications That Use Statistical Analysis Easier to Build and Maintain

Thursday, February 14, 3:00–4:00 pm, 430/438 Soda Hall (Wozniak Lounge), UC Berkeley
Christopher Re, University of Wisconsin

The question driving my work is: How should one deploy statistical data-analysis tools to enhance data-driven systems? Even partial answers to this question may have a large impact on science, government, and industry—each of which is increasingly turning to statistical techniques to get value from their data.

To understand this question, my group has built or contributed to a diverse set of data-processing systems: a system, called GeoDeepDive, that reads and answers questions about the geology literature and is used by geologists to gain insight into the Earth’s carbon cycle; a muon filter that is used in the IceCube neutrino telescope to process over 250 million events each day in the hunt for the origins of the universe; and a host of enterprise analytics applications with Oracle and EMC/Greenplum. Even in this diverse set, we have found common abstractions that enable one to build and maintain such systems in a more cost-effective way.

In this talk, I will describe some of these abstractions along with the theoretical and algorithmic questions that they raise. Finally, I will describe my vision of how and why classical data management will continue to play an important role in the age of statistical data analysis.

Papers, software, virtual machines that contain installations of our software, links to applications that are discussed in this talk, and our list of collaborators are available from http://www.cs.wisc.edu/hazy. We also have a YouTube channel (http://www.youtube.com/HazyResearch) that contains videos about our projects.

 



About Computing Sciences at Berkeley Lab

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.