InTheLoop | 03.04.2013

The Weekly Newsletter of Berkeley Lab Computing Sciences

March 4, 2013

MIT’s Ernest J. Moniz Nominated Secretary of Energy

President Barack Obama today nominated Massachusetts Institute of Technology professor Ernest Moniz as energy secretary. Moniz, a physicist, lends Obama’s Cabinet scientific heft and brings prior Washington experience. At MIT, he has directed the school’s Energy Initiative, where he oversaw reports on almost every aspect of energy. Moniz, who served as associate director of the White House Office of Science and Technology Policy and as undersecretary of energy under President Bill Clinton, is also devoted to the “all-of-the-above” energy strategy that Obama has embraced. Read more.


NERSC Invites Users to Try New Cray XC30, Edison Phase I

All user accounts have been enabled on the first phase of Edison, NERSC’s newest HPC resource, a Cray XC30. The system was installed last December and is currently in the pre-production phase, during which NERSC users have access to the machine while NERSC and Cray staff continue making improvements to the system. Users are invited to run jobs on the Edison Phase I system free of charge in exchange for providing feedback to NERSC staff about their experiences using Edison.

The Edison Phase I system has 664 compute nodes and 10,624 cores. Each node has two eight-core Intel “Sandy Bridge” processors running at 2.6 GHz (16 cores per node) and 64 GB of memory. While the user environment on Edison is remarkably similar to that on Hopper, a number of new features and technologies are available on the Edison Phase I system, including the Cray Aries high-speed interconnect, Hyper-Threading technology, the Sonexion storage system, and an external batch server.

The Edison Phase I system will target mid-sized jobs using roughly a couple hundred cores. Users running applications on more than ~1,000 cores will see better throughput on Hopper until the second phase of Edison arrives in a few months; the two phases will then be joined to create NERSC’s next petaflop system with over 100,000 cores. More information about Edison can be found here.


Computing Sciences Hosts Joint Workshop with Tsukuba University

The Computing Sciences Directorate at Berkeley Lab and the Center for Computational Sciences at the University of Tsukuba, Japan, will hold a joint meeting at the lab this Thursday and Friday, March 7–8, 2013, in the Bldg. 50F-1647 conference room. The meeting will feature research areas of potential interest to both institutions. Esmond Ng, head of CRD’s Applied Mathematics and Scientific Computing Department, will host the seminar. Here is the agenda:

Thursday, March 7, 2013

9:00  Kathy Yelick: Welcome and The DEGAS Project
9:45  Mitsuhisa Sato: Updates of CCS and XcalableMP Projects
10:30  Break
11:00  Yili Zheng: PGAS Programming System Research
11:30  Steve Hofmeyr: Tessellation: A New Operating System for Manycore
12:00  Lunch and discussions
13:15  Hiro Tadano: A Block Krylov Subspace Method for Computing High Accuracy Solutions and Its Stabilization
13:45  Chao Yang: Computing a Large Number of Eigenpairs on Multi-/Many-core Systems
14:15  Go Ogiya: Study of the Core-Cusp Problem in Cold Dark Matter Halos using N-Body Simulations on GPU Clusters
14:45  Break
15:15  Arie Shoshani: The Scalable Data Management, Analysis, and Visualization SciDAC Institute
15:45  Dan Gunter: The Materials Project and Tigres, Workflows for Big Data
18:30  Dinner (by invitation) and discussions

Friday, March 8, 2013

9:00  Taisuke Boku: HA-PACS and TCA for GPU Direct Communication
9:30  John Shalf: The Evolution of Programming Models in Response to Hardware Constraints
10:00  Break
10:30  Hideyuki Kawashima: Stream Data Processing with SS*
11:00  John Wu: FastBit: An Efficient Compressed Bitmap Index Technology
11:30  Lunch and discussions

For additional information, such as site access or directions to the conference room, please contact CSSeminars-Help@hpcrd.lbl.gov.


Greg Bell to Speak at ASCAC Meeting, Give Keynote at CENIC Conference

The DOE’s Advanced Scientific Computing Advisory Committee (ASCAC) is holding its Spring 2013 meeting this Tuesday and Wednesday, March 5–6, at the American Geophysical Union in Washington, DC. On Wednesday, ESnet Director Greg Bell will give a televideo presentation on ESnet-5. Go here for the complete agenda.

Bell will also give a keynote presentation at the 2013 CENIC Annual Conference, “Building Blocks for Next Gen Networks,” held March 11–13 at the California Institute for Telecommunications and Information Technology (Calit2) on the UC San Diego campus. The conference brings together members of the K-20 research and education community in California with colleagues from around the world for presentations, demonstrations, and keynotes illustrating the value that the high-performance networking made possible by the California Research and Education Network brings to their communities.

CENIC, the Corporation for Education Network Initiatives in California, designs, implements, and operates CalREN, the California Research and Education Network, a high-bandwidth, high-capacity Internet network specially designed to meet the unique requirements of these communities, and to which the vast majority of the state’s K-20 educational institutions are connected.

“CENIC is one of the most advanced networks in the world, and its ten million users include many DOE-funded scientists and collaborators,” Bell said. “I’m looking forward to sharing ESnet’s perspective on how we can work together to build campus networks that help everyone move data faster.”


CASTRO Supernova Simulation Appears on Two NCSA Magazine Covers

Images from a recent Type Ia supernova post-ignition simulation using the Center for Computational Science and Engineering’s (CCSE’s) CASTRO code are featured on the covers of the 2012 summer and fall issues of the National Center for Supercomputing Applications’ (NCSA’s) Access magazine. The CASTRO simulation was performed by Chris Malone and Stan Woosley of UC Santa Cruz and Andy Nonaka of CCSE, and used 24 million CPU hours from a Blue Waters Early Science System Award. This simulation ran on 64,000 cores and used five levels of adaptive mesh refinement (AMR) in order to realize an unprecedented ~100 m resolution.


Berkeley Lab Contributes to 2013 Tapia Computing Conference

The 2013 Richard Tapia Celebration of Diversity in Computing Conference was the most successful in the 12-year history of the conference, with Berkeley Lab Computing Sciences making key contributions. Held Feb. 7–10 in Washington, D.C., the 2013 conference drew a record 550 participants, 60 percent of whom were students. Read more.


Software Carpentry Course and Boot Camps This Week

The IT Division is hosting a special two-hour course, created just for Berkeley Lab scientists and program managers, designed to help you get the most out of your software development projects and avoid common pitfalls. The course will take place today, Monday, March 4, from 2:00–4:00 pm in Perseverance Hall (54-130).

Scientific projects with limited software development resources often get into trouble. Sometimes the postdoc developer leaves, or the expectations around the software grow as the project gets larger. Whether you’re in this early stage or already responsible for a large development effort, this course will share concepts, tips, and horror stories to help you give your projects the best chance of success. The focus of the course is on managing software development projects for science, not on actual coding or development.

The course will be taught by Greg Wilson, founder of the nonprofit Software Carpentry Foundation, which works to help scientists strengthen their computational skills. Wilson has decades of experience helping scientists and software teams succeed.

The course was designed with input from LBNL IT and NERSC and informed by Greg’s many years of experience. There is no charge. Please register for the course here.

A very small number of spots are still open for the Software Carpentry Boot Camps as well — two-day sessions dedicated to teaching scientists how to be better computer users. This free class will cover everything from using the shell to beginning scientific programming. There are two classes with identical content: March 4–5 and March 6–7. For detailed information, go here. To register, go here.


This Week’s Computing Sciences Seminars

Domain-Specific Abstractions and Compiler Transformations

Monday, March 4, 12:00–1:30 pm, OSF 943-238
Saday Sadayappan, Ohio State University

Recent trends in architecture are making multicore parallelism as well as heterogeneity ubiquitous. This creates significant challenges for application developers as well as compiler implementers. Currently it is virtually impossible to achieve performance portability of high-performance applications, i.e., to develop a single version of source code for an application that achieves high performance on different parallel computer platforms. Different implementations of compute-intensive core functions are generally needed for different target platforms, e.g., multicore CPUs versus GPUs.

A promising approach to performance portability, i.e., “write once, execute anywhere,” is to identify suitable domain-specific abstractions and compiler techniques that automatically transform high-level specifications into high-performance implementations on different targets. The talk will report on efforts to develop performance-portable compiler techniques for two domains: stencil computations and tensor contractions.
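For readers outside the domain, a stencil computation updates each grid point from a fixed pattern of neighboring points. Below is a minimal Python sketch of the kind of kernel such compilers target (a five-point Jacobi sweep on a 2D grid); the array names, grid size, and iteration count are illustrative, not taken from the talk:

```python
import numpy as np

def jacobi_step(u):
    """One five-point stencil sweep: each interior point becomes the
    average of its north, south, east, and west neighbors."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v

# Illustrative usage: relax a 64x64 grid with one heated boundary.
grid = np.zeros((64, 64))
grid[0, :] = 1.0  # hypothetical boundary condition
for _ in range(100):
    grid = jacobi_step(grid)
```

A performance-portable compiler would take a specification at roughly this level and generate tuned CPU or GPU code from it.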

Iterative Ranking from Pair-Wise Comparisons

Monday, March 4, 2:30–3:30 pm, 400 Cory Hall, UC Berkeley
Devavrat Shah, MIT

The question of aggregating pair-wise comparisons to obtain a global ranking over a collection of objects has been of interest for a very long time: be it ranking of online gamers and chess players, aggregating social opinions, or deciding which product to sell based on transactions. In most settings, in addition to obtaining a ranking, finding ‘scores’ for each object (e.g., a player’s rating) is of interest for understanding the intensity of the preferences.

We propose an iterative rank aggregation algorithm for discovering scores for objects from pair-wise comparisons. The algorithm has a natural random walk interpretation over the graph of objects, with an edge present between a pair of objects if they are compared; the scores turn out to be the stationary probabilities of this random walk. The algorithm is model independent. To establish its efficacy, however, the popular Bradley-Terry-Luce (BTL) model is considered. We bound the finite sample error rates between the scores assumed by the BTL model and those estimated by our algorithm. In particular, the number of samples required to learn the scores well with high probability depends on the structure of the comparison graph: when the Laplacian of the comparison graph has a constant spectral gap, e.g., when pairs are chosen at random for comparison, this leads to near-optimal dependence on the number of samples.
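As a rough illustration of the random-walk idea (a minimal sketch, not the authors’ published algorithm; the comparison-data format and the self-loop normalization are assumptions of this example):

```python
import numpy as np

def rank_from_comparisons(n, comparisons, iters=1000):
    """Score n objects from pair-wise comparisons via a random walk:
    the walk moves from i to j in proportion to how often j beat i,
    and its stationary distribution serves as the score vector.
    The (winner, loser) input format is an assumption of this sketch."""
    wins = np.zeros((n, n))
    for w, l in comparisons:
        wins[w, l] += 1.0
    played = wins + wins.T
    # frac_lost[i, j]: fraction of i-vs-j comparisons that i lost.
    frac_lost = np.divide(wins.T, played, out=np.zeros((n, n)),
                          where=played > 0)
    d_max = max(int((played > 0).sum(axis=1).max()), 1)
    P = frac_lost / d_max                  # off-diagonal transitions
    P += np.diag(1.0 - P.sum(axis=1))      # self-loops keep rows stochastic
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):                 # power iteration to stationarity
        scores = scores @ P
    return scores

# Illustrative usage: 0 beats 1 twice, 1 beats 2 once, 0 beats 2 once.
print(rank_from_comparisons(3, [(0, 1), (0, 1), (1, 2), (0, 2)]))
```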

This is based on joint work with Sahand Negahban (MIT) and Sewoong Oh (UIUC).

New First-Principles Methods for Electronic Structure Calculations of Defects and Impurities

Tuesday, March 5, 10:00–11:00 am, 50B-2222
Gaigong Zhang, University of California, Davis

First-principles computational materials modeling allows us to obtain a more profound understanding and more accurate estimates of the properties of physical systems than classical methods. This approach can also predict the behavior of a desired new candidate material with targeted properties. Density functional theory within the Kohn-Sham formalism provides a very efficient route to approximating the intractable many-body problem. Within such a framework, new methodologies and improved numerical algorithms make simulations more reliable by employing functionals with higher accuracy on the one hand, and enable the calculation of large-scale systems through computationally efficient algorithms on the other. In this talk, I will present new DFT-based methods for the study of defects and impurities, as well as results using the new and standard methods for shallow impurity level binding energies and the investigation of native defects in PbI2 as an ultra-fast scintillator for gamma ray detection.

Our new method for calculating shallow impurity levels in bulk semiconductors combines the GW approach for the treatment of the central-cell potential with a potential patching method for large systems (with 64,000 atoms) to describe the impurity state wave functions. The calculated acceptor levels in Si, GaAs and an isovalent bound state of GaP are in excellent agreement with experiments with a root-mean-square error of 8.4 meV.

Lead iodide is studied as an ultra-fast scintillator for time-of-flight medical applications. The ultra-fast scintillation and near band-edge emission of undoped PbI2 at low temperatures are attributed to a shallow native donor-acceptor recombination. We therefore systematically studied the electronic structure and formation energies of donor and acceptor levels introduced by isolated intrinsic defects and defect complexes in PbI2. Our study shows that only complexes such as Schottky defects can provide both shallow donors and shallow acceptors, explaining the near band-edge emission.

Detecting Genomic Insertions and Deletions in the Cloud with MapReduce and Cloudbreak

Tuesday, March 5, 1:00–2:00 pm, 465H Soda Hall, UC Berkeley
Chris Whelan and Kemal Sonmez, Oregon Health and Science University

The detection of genomic structural variations remains one of the most difficult challenges in analyzing high-throughput sequencing data. Considering multiple mappings of all reads, rather than only uniquely mapped discordant fragments, can improve the performance of read-pair based detection methods. However, the computational requirements for creating, storing, and processing large scale data sets with multiple mappings can be formidable. Meanwhile, the growing size and number of sequencing data sets have led to intense interest in distributing computation for genomic analyses to cloud or commodity servers. MapReduce, via its Hadoop implementation, is becoming a standard architecture for distributing processing across such compute clusters.

We have developed a conceptual framework for structural variation detection in Hadoop based on computing local features along the genome. In this framework, we have implemented and evaluated an algorithm for finding deletions and short insertions based on fitting a Gaussian mixture model (GMM) to the distribution of mapped insert sizes spanning each location in the genome. A similar method was used in MoDIL; however, our algorithm and the Hadoop framework drastically reduce the runtime requirements and overall difficulty of using this approach.
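To illustrate the core idea at a single locus (a toy sketch, not Cloudbreak’s implementation; the library mean, thresholds, and data below are hypothetical), one can fit a two-component GMM to the spanning insert sizes and flag a deletion when one component’s mean is shifted well above the expected library insert size:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def flag_deletion(insert_sizes, lib_mean=300.0, min_weight=0.3, min_shift=100.0):
    """Fit a two-component GMM to the mapped insert sizes spanning one
    locus; flag a candidate deletion if a component mean sits well above
    the library mean with appreciable weight (illustrative thresholds)."""
    X = np.asarray(insert_sizes, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
    means = gmm.means_.ravel()
    weights = gmm.weights_.ravel()
    shift = means - lib_mean              # implied deletion size per component
    hits = (shift > min_shift) & (weights > min_weight)
    return bool(hits.any()), shift[hits]

# Illustrative usage: half the fragments span a ~200 bp deletion.
rng = np.random.default_rng(0)
sizes = np.concatenate([rng.normal(300, 30, 50), rng.normal(500, 30, 50)])
print(flag_deletion(sizes))
```

In the MapReduce framing, a computation like this would run as the per-location (“local feature”) step, with the mapping and grouping of reads to genomic locations handled by Hadoop.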

On simulated and real data sets of paired-end reads, our algorithm achieves performance similar to or better than a variety of popular structural variation detection algorithms. Cloudbreak performs well on both small (40–100 bp) and medium-size (100 bp–25 kb) deletions, and in our simulations has greater sensitivity at most fixed levels of specificity than other methods. We also show increased performance in finding deletions in repetitive areas of the genome, identifying more variants that overlap repeats than other approaches in both simulated and real data. Cloudbreak also outperforms other read-pair based approaches for small insertion detection.

In addition, our algorithm can accurately genotype heterozygous and homozygous deletions and short insertions from diploid samples. Using the parameters computed in fitting the GMM and a simple thresholding procedure, we were able to achieve 88.0% and 94.9% accuracy in predicting the genotype of the true positive deletions we detected in simulated and real data sets, respectively, and 91.2% accuracy on simulated insertions.

On Behavioral Programming

Tuesday, March 5, 1:30–3:00 pm, 430/438 Soda Hall (Wozniak Lounge), UC Berkeley
David Harel, Weizmann Institute of Science

The talk starts from a dream/vision paper I published in 2008, whose title, “Can Programming Be Liberated, Period?”, is a play on that of John Backus’ famous Turing Award lecture (and paper). I will propose that — or rather ask whether — programming can be made a lot closer to the way we humans think about dynamics, and the way we somehow manage to get others (e.g., our children, our employees) to do what we have in mind. Technically, the question is whether we can liberate programming from its three main straitjackets: (1) having to directly produce a precise artifact in some language; (2) having to actually produce two separate artifacts (the program and the requirements) and then pit one against the other; (3) having to program each piece/part/object of the system separately. The talk will then get a little more technical, providing some evidence of the feasibility of the dream via LSCs and the play-in/play-out approach to scenario-based programming, and its more recent Java variant. The entire body of work around these ideas can be framed as a paradigm, which we call behavioral programming.

Tomorrow Is Yesterday (or Nearly So): Historical Lessons That Foretell the Future of Mobile and (Connected) Embedded Systems

Tuesday, March 5, 4:10–5:00 pm, 490H Cory Hall, UC Berkeley
Bob Iannucci, Carnegie Mellon University, Silicon Valley Campus

The evolution of mobile computing strongly resembles the evolution of the three generations of computing that preceded it, but with a few crucial differences. As in past generations, value shifts among hardware, software, and services in fairly predictable ways. Unlike past generations, mobile computing is encumbered both by the complexity of distributed computing and by the hard limits imposed by physics and human physiology. Both mobile and connected embedded computing face issues related to power and scale.

This talk examines the evolution of mobile computing, probes some of the difficult problems at its heart, and identifies a handful of challenges that could drive new research. We will examine some of these in detail. We also examine connected embedded computing and its evolution toward a true platform.

Algebra and Geometry of Tensor Decomposition: Scientific Computing and Matrix Computations Seminar

Wednesday, March 6, 12:10–1:00 pm, 380 Soda Hall, UC Berkeley
Luke Oeding, UC Berkeley

Tensors (or multidimensional matrices) represent a type of data structure that is ubiquitous in the sciences. Because of their wide use, there is much interest in understanding their fundamental properties, such as tensor decomposition and tensor rank. Tensor decomposition is a sparse representation of a tensor as a linear combination of rank-one tensors, while tensor rank is the number of summands in a minimal decomposition.
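In symbols, for a third-order tensor the decomposition described above reads (a standard formulation, stated here for concreteness rather than taken from the talk):

$$T \;=\; \sum_{i=1}^{r} \lambda_i \, a_i \otimes b_i \otimes c_i, \qquad (a_i \otimes b_i \otimes c_i)_{jkl} = (a_i)_j (b_i)_k (c_i)_l,$$

where each summand is a rank-one tensor, and the rank of $T$ is the smallest $r$ for which such a decomposition exists.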

So it is natural to ask: How does one check the rank of a given tensor? Or, how does one find a minimal decomposition of a tensor? Already with these questions, many beautiful objects from classical algebraic geometry arise, such as the Segre variety and its secant varieties. Moreover, because of the inherent symmetry, representation theory becomes an important tool.

After discussing some of the applications of tensor decomposition, I will explain the geometric setting together with the algebraic questions we’re interested in. Throughout, I will describe recent contributions to the theory (finding equations of secant varieties) and practice (the development of new algorithms) related to tensor decomposition.

Special EECS Seminar: Compression and Modern Data Processing

Wednesday, March 6, 1:00–2:00 pm, 540A/B Cory Hall (DOP Center Conference Room), UC Berkeley
Thomas Courtade, Stanford University

At first glance, modern applications of data processing — such as clustering, querying, and search — bear little resemblance to the classical Shannon-theoretic problem of lossy compression. However, the ultimate goal is the same for modern and classical settings; both demand algorithms which strike a balance between the complexity of the algorithm output and the utility that it provides. Thus, when we attempt to establish fundamental performance limits for these “modern” data processing problems, elements of classical rate distortion theory naturally emerge.

Inspired by the challenges associated with extracting useful information from large datasets, I will discuss compression under logarithmic loss. Logarithmic loss is a penalty function which measures the quality of beliefs a user can generate about the original data upon observing the compressor’s output. In this context, we characterize the tradeoff between the degree to which data can be compressed and the quality of beliefs an end user can produce. Notably, our results for compression under logarithmic loss extend to distributed systems and yield solutions to two canonical problems in multiterminal source coding.
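In its standard form (consistent with the description above, though the talk may use a variant), logarithmic loss charges a reported belief $q$, a probability distribution over the source alphabet, according to the probability it assigns to the realized symbol $x$:

$$\ell(x, q) \;=\; \log \frac{1}{q(x)},$$

so the loss is small exactly when the beliefs generated from the compressed output place high probability on the true data.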

I will also briefly discuss recent work on compression for identification, where we seek to compress data in a manner that preserves the ability to reliably answer queries of a certain form. This setting stands in stark contrast to the traditional compression paradigm, where the goal is to reproduce the original data (either exactly or approximately) from its compressed form. Under certain assumptions on the data sources, we characterize the tradeoff between compression rate and the reliability at which queries can be answered.

Can We Computerize an Elephant?

Wednesday, March 6, 2:00–3:30 pm, 405 Soda Hall, UC Berkeley
David Harel, Weizmann Institute of Science

The talk shows how techniques from computer science and software engineering can be applied beneficially to research in the life sciences. We discuss the idea of comprehensive and realistic modeling of biological systems, where we try to understand and analyze an entire system in detail, utilizing in the modeling effort all that is known about it. I will address the motivation for such modeling and the philosophy underlying the techniques for carrying it out, as well as the crucial question of when such models are to be deemed valid, or complete. The examples will be from among the biological modeling efforts my group has been involved in: T-cell development, lymph node behavior, organogenesis of the pancreas, rat whisking, cancer development, and fate determination in the reproductive system of the C. elegans nematode worm. The ultimate long-term “grand challenge” is to produce an interactive, dynamic, computerized model of an entire multi-cellular organism, such as C. elegans, which is complex, but well-defined in terms of anatomy and genetics.

EECS Colloquium: Software Is Rebooting Journalism: Data Mining and Visualization in the Public Interest

Wednesday, March 6, 4:00–5:00 pm, 306 Soda Hall (HP Auditorium), UC Berkeley
Jeff Larson, News Applications Developer, ProPublica

ProPublica is a nonprofit news outlet in New York City that publishes investigative journalism. It also has a team of seven News Application developers who create software that tells stories and finds meaningful stories in data. Their News Applications include databases such as Dollars for Doctors, which lists pharmaceutical company payments to doctors, and the Message Machine, an application that uses natural language processing and machine learning to reverse engineer political email targeting. Jeff Larson will talk about the evolution of this new field of journalism and show off a few of ProPublica’s News Applications.

Destination Europe for Researchers: Marie Curie Initiative, European Research Council and All That!

Thursday, March 7, 11:30 am–1:00 pm, Sutardja Dai Hall, Bechtel Boardroom (Room 630), UC Berkeley
Michel Cosnard, Inria Chairman and CEO

In this talk, we will present two major European initiatives addressed to researchers of any nationality at all stages of their careers, and also introduce some facilities set up by Inria in order to attract the best of them to its research groups.

The first initiative, the “Marie Curie Actions for worldwide researchers and institutions,” concerns all domains of science and is mainly devoted to mobility (international, intersectoral, and interdisciplinary). It encompasses four main objectives: initial training of researchers; lifelong training and career development; an industry dimension; and an international dimension with “world fellowships.” We will present the major characteristics of these objectives.

The second major initiative is the European Research Council (ERC), which aims to promote Europe as a prime location for top scientists in frontier research. This initiative is part of the 7th EU Research Framework Programme and has been allotted a significant budget of $10 billion for the period 2007–2013. Excellence is the only criterion, and the ERC is open to researchers from anywhere in the world wishing to do research in Europe. We will go into the details of this program, which is already funding more than 2,000 researchers (starting and advanced) at major research centers and universities in Europe.

Finally, we will give some insight into the way Inria uses these instruments to promote excellence in its research groups. For example, Inria is the institution that hosts the highest number of ERC laureates in computer science. Independently, Inria is also setting up its own schemes to attract promising researchers in its fields of excellence; of course, these schemes have been designed in strong synergy with the ERC instruments.

Design Mining the Web

Thursday, March 7, 3:00–4:00 pm, 430/438 Soda Hall (Wozniak Lounge), UC Berkeley
Ranjitha Kumar, Stanford University

The Web has transformed the nature of creative work. For the first time, millions of people have a direct outlet for sharing their creations with the world. As a result, the Web has become the largest repository of design knowledge in human history, and the ensuing “democratization of design” has created a critical feedback loop, engendering a new culture of reuse and remixing.

The means and methods designers employ to draw on prior work, however, remain mostly informal and ad hoc. How can content producers find relevant examples amongst hundreds of millions of possibilities and leverage existing design practice to inform and improve their creations? My research explores data-driven techniques for working with examples at scale during the design process, automating search and curation, enabling rapid retargeting, and learning generative probabilistic models to support new design interactions. Knowledge discovery and data mining have revolutionized informatics; in this talk, I’ll discuss what we can learn from mining design.

Emerging Device Technology for Future Computing Paradigms

Friday, March 8, 1:00–2:00 pm, 521 Cory Hall, UC Berkeley
Shimeng Yu, Stanford University

As CMOS scaling approaches its physical limit, it is necessary to think about new computing paradigms that continue to improve system performance and reduce power consumption. We are entering the “Big Data” era, which puts data at the center of computing. The size of data sets is rapidly exceeding exascale. Therefore, increasing the data bandwidth of the memory subsystem is a pressing need. Emerging memory technologies such as oxide-based resistive random access memory (RRAM) may bring enormous opportunities for the evolution or revolution of today’s computing system architecture.

In the first part of the talk, I will discuss the physical mechanism of the resistive switching phenomenon in oxides. To elucidate the oxide RRAM device physics, various characterization techniques were employed to identify the electron conduction mechanism and oxygen ion migration dynamics. Then a kinetic Monte Carlo numerical simulator was developed for understanding the variability of resistive switching. Finally, a compact device model was developed for circuit/system-level simulations.

In the second part of the talk, I will explore potential applications of oxide RRAM technology, including (1) cost-effective 3D integration of RRAM cross-point arrays as a NAND flash replacement; (2) non-volatile reconfigurable memory for FPGA applications; and (3) artificial synaptic devices for implementing adaptive learning algorithms in bio-inspired neuromorphic computing systems.

Future research directions are to first change today’s memory system hierarchy, then bring non-volatility and programmability into the logic, and eventually break the von Neumann bottleneck between computation and storage. To accomplish these goals, future research calls for active interaction among fundamental physics exploration, material/device engineering, and peripheral circuitry and system architecture co-design.



About Computing Sciences at Berkeley Lab

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.