
InTheLoop | 03.11.2013

The Weekly Newsletter of Berkeley Lab Computing Sciences

March 11, 2013

In the News (1): HPCwire: Scalable Data Race Detection

HPCwire reports that a team of researchers from Berkeley Lab and the University of California, Berkeley, is investigating cutting-edge programming languages for HPC: languages that promote hybrid parallelism and shared-memory abstractions using a global address space. This programming style is especially prone to data races that are difficult to detect, and prior work in the field has demonstrated 10x–100x slowdowns for non-scientific programs.

In a recent paper, the computer scientists present what they say is “the first complete implementation of data race detection at scale for UPC programs.” The paper, “Scalable data race detection for partitioned global address space programs,” by Chang-Seo Park, Koushik Sen, and Costin Iancu, is published in the Proceedings of the 18th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. Read more.
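The core idea behind dynamic race detection can be illustrated with a toy happens-before checker over a recorded trace. This is only a minimal sketch of the general technique, not the paper's actual algorithm, which uses additional machinery to make detection scale for UPC programs; the trace format and event names here are invented for illustration.

```python
# Toy happens-before data race checker: two accesses to the same
# location race if at least one is a write and no synchronization
# orders them. Vector clocks track the ordering.

from collections import defaultdict

def merge(a, b):
    return [max(x, y) for x, y in zip(a, b)]

def find_races(trace, nthreads):
    """trace: list of (tid, op, loc) events, op in {'rd', 'wr'} for
    memory accesses, or 'snd'/'rcv' on a channel for synchronization."""
    clocks = [[0] * nthreads for _ in range(nthreads)]
    chans = {}                    # channel -> sender's clock snapshot
    last_wr = {}                  # loc -> (tid, clock at last write)
    last_rd = defaultdict(list)   # loc -> [(tid, clock)] since last write
    races = []
    for tid, op, loc in trace:
        clocks[tid][tid] += 1
        if op == 'snd':
            chans[loc] = list(clocks[tid])
        elif op == 'rcv':
            clocks[tid] = merge(clocks[tid], chans[loc])
        else:
            vc = clocks[tid]
            if loc in last_wr:
                wtid, wvc = last_wr[loc]
                # Prior write not "seen" by this thread -> unordered.
                if wtid != tid and wvc[wtid] > vc[wtid]:
                    races.append(loc)
            if op == 'wr':
                for rtid, rvc in last_rd[loc]:
                    if rtid != tid and rvc[rtid] > vc[rtid]:
                        races.append(loc)
                last_wr[loc] = (tid, list(vc))
                last_rd[loc] = []
            else:
                last_rd[loc].append((tid, list(vc)))
    return races
```

A write in thread 0 followed by an unsynchronized read in thread 1 is flagged; inserting a send/receive pair between them establishes an ordering and suppresses the report.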

In the News (2): CNN: The Historical Analogs of Brilliant Women

It’s no secret that we still have a long way to go before achieving gender equity in the fields collectively known as STEM: science, technology, engineering, and mathematics. But what better way to derive inspiration than to reflect upon those who have managed to buck the trend? In honor of Women’s History Month, CNN is taking a look at contemporary innovators in STEM and their historical analogs — including Associate Lab Director Kathy Yelick and her programming language predecessor, Grace Murray Hopper. Read more.

In the News (3): MarketWatch Quotes Greg Bell, Brent Draney on Ultra-Fast Networks

An article in Dow Jones MarketWatch gives an overview of the recent 100G and Beyond Workshop, sponsored by the California Institute of Telecommunications and Information Technology (Calit2), the Energy Sciences Network (ESnet), and the Corporation for Education Network Initiatives in California (CENIC). The workshop examined 100-gigabit networking and the ways in which it will impact areas as diverse as data-intensive science, health care, media arts applications, smart manufacturing, and more. ESnet Director Greg Bell and NERSC Networking, Security, and Servers Group Lead Brent Draney are among the people quoted in the article. Read more.

In the News (4): In Computers We Trust? David Bailey Comments

An article in the Simons Foundation News, “In Computers We Trust?” asks whether mathematics can still be done without computers. “The time when someone can do real, publishable mathematics completely without the aid of a computer is coming to a close,” comments David Bailey, head of CRD’s Complex Systems Group and the author of several books on computational mathematics. “Or if you do, you’re going to be increasingly restricted into some very specialized realms.”

The use of computers is both widespread and underacknowledged. According to Bailey, researchers often de-emphasize the computational aspects of their work in papers submitted for publication, possibly to avoid encountering friction. But computer code is fallible, because humans write it. Coding errors (and the difficulty in detecting them) have occasionally forced mathematicians to backpedal.

Most of the software currently used by mathematicians can’t be verified. The best-selling commercial math programming tools — Mathematica, Maple, and Magma (each costing about $1,000 per professional license) — are closed source, and bugs have been found in all of them.

In December, Bailey and dozens of other researchers met at the Institute for Computational and Experimental Research in Mathematics, a new research institute at Brown University, to discuss standards for reliability and reproducibility. From myriad issues, one underlying question emerged: In the search for ultimate truth, how much can we trust computers? Read more.

IgniteCamp “Imagining What’s Possible” in San Leandro

ESnet’s Eric Pouyoul was one of a handful of speakers invited to “Imagining What’s Possible,” an event organized by US Ignite, a nonprofit organization that fosters the creation of next-generation Internet applications that provide transformative public benefit. In his 30-minute presentation, Pouyoul made the case for more bandwidth by showing the same scientific dataset at 10 gigabits per second and at 100 Gbps, with clearly different levels of resolution and detail. He also agreed with several other speakers about the need to remove the complexity that haunts today’s networks. A video of his talk can be viewed here.

The event, held in early February in San Leandro, was US Ignite’s first “BarCamp,” a user-organized meeting. It drew more than 60 attendees from many backgrounds.

According to its website, US Ignite engages diverse public and private leaders to “ignite” the development and deployment of new apps with profound impact on how Americans work, live, learn, and play. The organization took its inspiration from the White House Office of Science and Technology Policy and the National Science Foundation and convenes partners in industry, academia, and government to identify and share best practices and resources. Its applications are focused in six areas of national priority:

  • Education and workforce
  • Energy
  • Health
  • Public safety
  • Transportation
  • Advanced manufacturing

Math Films Mathathon March 18 and 20 at SF’s Roxie Theater

Zala Films is pleased to announce a Two-Night Mathathon: Four Films by George Csicsery at the Roxie Theater, 3117 16th Street in San Francisco.

Monday, March 18

Taking the Long View: The Life of Shiing-Shen Chern, 7:45 pm

“Taking the Long View: The Life of Shiing-Shen Chern,” George Csicsery’s most recent math biopic, makes its Bay Area premiere. Chern lived in Berkeley for nearly 40 years. A co-founder of the Mathematical Sciences Research Institute in Berkeley, Chern was instrumental in the reintroduction of scientific research into China. (57 minutes)

Julia Robinson and Hilbert’s Tenth Problem, 6:30 and 9:00 pm

“Julia Robinson and Hilbert’s Tenth Problem” wraps the biography of a pioneer among American women in mathematics around an unfolding 70-year quest to solve a famous problem posed by German mathematician David Hilbert in 1900. The search involves Russians and Americans working together at the height of the Cold War without being allowed to meet. The answer, when it came, contributed to the development of the modern computer. (57 minutes)

Wednesday, March 20

N Is a Number: A Portrait of Paul Erdős, 6:00 and 9:00 pm

“N Is a Number: A Portrait of Paul Erdős” is the iconic math biography, and Erdős, its subject, has emerged as one of the 20th century’s greatest mathematicians. Everyone knows about Kevin Bacon and Erdős. Their connection is in this film, but if you don’t see the beginning, you might miss it. Erdős died in 1996. His 100th birthday will be celebrated by mathematicians around the world in March 2013. (57 minutes)

Hard Problems: The Road to the World’s Toughest Math Contest, 8:00 pm

“Hard Problems: The Road to the World’s Toughest Math Contest” is the “Spellbound” of mathematics. The competition is grueling, but the contestants are a fascinating cross section of the best and brightest high school students in the country. The International Math Olympiad pits six students, who are prepared in Lincoln, Nebraska, against their peers from 92 countries in an exotic setting far from home. (81 minutes)

This Week’s Computing Sciences Seminars

Decoupling Algorithms from the Organization of Computation for High-Performance Graphics and Imaging

Monday, March 11, 1:00–2:00 pm, 430/438 Soda Hall (Wozniak Lounge), UC Berkeley
Jonathan Ragan-Kelley, MIT

Future graphics and imaging applications—from photorealistic real-time rendering, to 4D light field cameras and pervasive sensing, to multi-material 3D printing—demand orders of magnitude more computation than we currently have. The efficiency and performance of an application are determined by the algorithm and the hardware architecture on which it runs, but critically also by the organization of computations and data. Real graphics and imaging applications have complex dependencies, and are limited by locality (the distance over which data has to move, e.g., from nearby caches or far away main memory) and synchronization. Increasingly, the cost of communication—both within a chip and over a network—dominates computation and power consumption, and limits the gains realized from shrinking transistors.

This talk will focus on the Halide language and compiler for image processing. Halide explicitly separates what computations define an algorithm from the choices of execution structure which determine parallelism, locality, memory footprint, and synchronization. For image processing algorithms with the same complexity—even the exact same set of arithmetic operations and data—executing on the same hardware, the order and granularity of execution and placement of data can easily change performance by an order of magnitude because of locality and parallelism. I will show that, for data-parallel pipelines common in graphics, imaging, and other data-intensive applications, the organization of computations and data for a given algorithm is constrained by a fundamental tension between parallelism, locality, and redundant computation of shared values. I will present a systematic model of “schedules” which explicitly trade off these pressures by globally reorganizing the computations and data for an entire pipeline, and an optimizing compiler that synthesizes high performance implementations from a Halide algorithm and a schedule. The end result is much simpler programs, delivering performance often many times faster than the best prior hand-tuned C, assembly, and CUDA implementations, while scaling across radically different architectures, from ARM cores to massively parallel GPUs.
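The separation the abstract describes can be sketched in plain NumPy (not Halide's actual syntax): the same 3x3 box-blur algorithm executed under two different "schedules" produces identical output while organizing computation and data very differently.

```python
import numpy as np

def blur_bf(img):
    # Schedule 1: breadth-first. Materialize the entire horizontal
    # pass before starting the vertical pass -- maximal parallelism,
    # but the intermediate leaves cache before it is reused.
    bx = (img[:, :-2] + img[:, 1:-1] + img[:, 2:]) / 3
    return (bx[:-2] + bx[1:-1] + bx[2:]) / 3

def blur_fused(img):
    # Schedule 2: compute the horizontal pass on demand inside the
    # vertical loop -- better locality, at the cost of redundantly
    # recomputing blur_x rows shared by adjacent output rows.
    h, w = img.shape
    out = np.empty((h - 2, w - 2))
    for y in range(h - 2):
        bx = (img[y:y+3, :-2] + img[y:y+3, 1:-1] + img[y:y+3, 2:]) / 3
        out[y] = (bx[0] + bx[1] + bx[2]) / 3
    return out
```

Both functions implement the same algorithm, so their results match exactly; only the trade-off between parallelism, locality, and redundant work changes, which is precisely the choice a Halide schedule makes explicit.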

I will also touch on other work I have done building compilers, systems, and architectures for offline and real-time rendering, scientific computing, and multi-material 3D printing.

Catching Stem and Progenitor Cells with Boolean Logic

Monday, March 11, 3:00–4:00 pm, 290 Hearst Memorial Mining Building, UC Berkeley
Debashis Sahoo, Siebel Fellow, Stanford University Institute for Stem Cell Biology and Regenerative Medicine

Many, if not all, organs and tissues consist of self-renewing stem cells that give rise to distinct, sequential progenitors with increasingly limited developmental potential, ultimately producing functional mature cells. All normal, cancerous, and otherwise diseased tissues contain a diversity of cell types with distinct morphological features. Identifying and characterizing these cell types within normal and diseased tissue is critical not only for understanding the underlying biology but also for developing more effective therapeutic strategies. Previous attempts to identify markers for cells at hierarchical stages of tissue differentiation involved either 1) large screening studies using antibody libraries or gene expression arrays, or 2) focused trials of established markers identified in other normal and diseased tissues. Unfortunately, these approaches are insufficient to trace complex cellular differentiation stages and thus most often fail. A systematic approach to identifying cells within tissue differentiation hierarchies is therefore required.

We developed systematic computational approaches to identify markers of stem and progenitor cells by analyzing publicly available, high-throughput gene expression datasets comprising more than 2 billion measurement points. We built a set of tools: StepMiner; BooleanNet (a network of Boolean implications); MiDReG (Mining Developmentally Regulated Genes), which uses Boolean implications to predict genes in developmental pathways; and HEGEMON (Hierarchical Exploration of Gene Expression Microarray Online), to identify genes expressed in stem and progenitor cells in both normal and malignant tissue development. We demonstrated that coordinated use of these tools can predict genes involved in developmental stages in human normal and cancer tissues, using examples from human B cells, bladder cancer, and colon cancer to show the power of this computational approach. For the cancer tissues, the genes newly identified with this approach predicted patient outcomes robustly, using samples ranging from early to late advanced disease stages. The approach identifies diagnostic and prognostic value in situ for immediate translation into clinical applications.
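The notion of a Boolean implication between two genes can be sketched simply: after expression values are binarized (as StepMiner does), "gene A high implies gene B high" holds when the (A high, B low) quadrant of the scatter is nearly empty. The check below is a simplified stand-in for BooleanNet's actual statistic, and the `eps` threshold is an invented illustration parameter.

```python
def implies(a, b, eps=0.1):
    # Test the Boolean implication 'a high => b high' on binarized
    # expression vectors: the a=1, b=0 quadrant must be sparse
    # relative to what independence would predict.
    n = len(a)
    n1a = sum(a)                  # samples where a is high
    n0b = n - sum(b)              # samples where b is low
    n10 = sum(1 for x, y in zip(a, b) if x == 1 and y == 0)
    expected = n1a * n0b / n      # count expected under independence
    return expected > 0 and n10 / expected < eps
```

Note the asymmetry: "a high implies b high" can hold while the converse does not, which is what lets Boolean implication networks order genes along developmental pathways.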

Randomized Matrix Algorithms and Large-Scale Scientific Data Analysis

Tuesday, March 12, 3:00–4:00 pm, 50B-4205
Michael W. Mahoney, Stanford University

Matrix problems are ubiquitous in many large-scale scientific data analysis applications; and in recent years randomization has proved to be a valuable resource for the design of better algorithms for many of these problems. Depending on the situation, better might mean faster in worst-case theory, faster in high-quality numerical implementation, e.g., in RAM or in parallel and distributed environments, or more useful for downstream domain scientists. This talk will describe the theory underlying randomized algorithms for matrix problems such as least-squares regression and low-rank matrix approximation; and it will describe the use of these algorithms in large-scale scientific data analysis and numerical computing applications. Examples of the former include the use of interpretable CUR matrix decompositions to extract informative markers from DNA single nucleotide polymorphism data as well as informative wavelength regions in astronomical galaxy spectra data; and examples of the latter include a randomized algorithm that beats LAPACK on dense overconstrained least-squares problems for data in RAM, and a randomized algorithm to solve the least absolute deviations problem on a terabyte of distributed data.
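The flavor of these randomized algorithms can be shown with a minimal Gaussian sketch-and-solve for least squares: compress the tall system with a random projection, then solve the small sketched problem exactly. This is a generic sketch of the idea, not the specific LAPACK-beating implementation mentioned in the abstract, and the function name and parameters are illustrative.

```python
import numpy as np

def sketched_lstsq(A, b, sketch_rows, seed=0):
    # Randomized least squares by sketch-and-solve: S compresses the
    # m-row system down to sketch_rows rows, approximately preserving
    # the least-squares geometry, then the small problem is solved.
    rng = np.random.default_rng(seed)
    m = A.shape[0]
    S = rng.normal(size=(sketch_rows, m)) / np.sqrt(sketch_rows)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x
```

For a consistent, well-conditioned system the sketched solution recovers the true coefficients; for genuinely overdetermined noisy systems it returns an approximate minimizer whose quality improves with the sketch size.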

Pure Storage Flash Array: An Embedded System on Steroids

Tuesday, March 12, 4:10–5:00 pm, 490H Cory Hall, UC Berkeley
Marco Sanvido, Pure Storage

The storage industry is currently in the midst of a flash revolution. Today’s smartphones, cameras, USB drives, and even laptop computers all use flash storage. However, the $30 billion a year enterprise storage market is still dominated by spinning disk. Flash has large advantages in speed and power consumption, but its disadvantages (price, limited overwrites, large erase block size) have prevented it from being a drop-in replacement for disk in a storage array. The question facing the industry is how to build a competitive hardware-software solution that turns flash into performant, reliable storage scaling to hundreds of TBs and beyond.

In this talk, we describe the design and technical challenges of the Pure FlashArray, an enterprise storage array built from the ground up around consumer flash storage. The array and its software, Purity, play to the advantages of flash while minimizing the downsides. Purity performs all writes to flash in multiples of the erase block size, and it keeps metadata in a key-value store that persists approximate answers, further reducing writes at the cost of extra (cheap) reads. Purity also reduces data stored on flash through a range of techniques, including compression, deduplication, and thin provisioning. The net result is a flash array that delivers a sustained read-write workload of over 100,000 4 KB I/O requests per second while maintaining uniform sub-millisecond latency. With many customers seeing 4x or greater data reduction, the Pure FlashArray ends up being cheaper than disk as well.
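The deduplication technique mentioned above can be sketched as a content-addressed block store: identical blocks are stored once and shared by reference. This is a toy illustration, not Purity's actual pipeline, which also compresses data and aligns writes to erase-block multiples.

```python
import hashlib

class DedupStore:
    # Minimal content-addressed block store: volumes hold lists of
    # block digests, and each unique block's bytes are stored once.
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}     # digest -> block bytes (stored once)
        self.volumes = {}    # volume name -> ordered list of digests

    def write(self, name, data):
        refs = []
        for i in range(0, len(data), self.block_size):
            chunk = data[i:i + self.block_size]
            d = hashlib.sha256(chunk).hexdigest()
            self.blocks.setdefault(d, chunk)   # dedup happens here
            refs.append(d)
        self.volumes[name] = refs

    def read(self, name):
        return b"".join(self.blocks[d] for d in self.volumes[name])
```

Writing two volumes with overlapping content consumes physical space only for the unique blocks, which is how dedup both cuts cost and reduces wear-inducing flash writes.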

Implicit Sampling and Data Assimilation: Scientific Computing and Matrix Computations Seminar

Wednesday, March 13, 12:10–1:00 pm, 380 Soda Hall, UC Berkeley
Alexandre J. Chorin, UC Berkeley/LBNL CRD

In many problems of computational science one wants to sample a given probability density (pdf), i.e., use a computer to construct a sequence of independent random vectors $x_i, i=1,2,\dots$, whose histogram converges to the given pdf. This can be difficult because the sample space is typically huge, and more important, because the portion of the space where the density is significant can be very small, so that one may miss it by an ill-designed sampling scheme. Indeed, Markov-chain Monte Carlo (MCMC), the most widely used sampling scheme, can be thought of as a search algorithm, where one starts at an arbitrary point and one advances step-by-step towards the high probability region of the space. This can be expensive, in particular because one is typically interested in independent samples, while the chain has a memory.

I will present an alternative, in which samples are found by solving an algebraic equation with a random input rather than by following a chain; each sample is independent of the previous samples. I will explain the construction in the context of numerical integration, and then apply it to filtering and data assimilation, where success requires efficient sampling.

(Joint work with M. Morzfeld and X. Tu.)
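The "solve an equation with a random input" idea has a familiar one-dimensional instance: inverse-transform sampling, where each independent sample solves cdf(x) = u for a fresh uniform draw u. The sketch below illustrates only that simplest instance, not the full implicit-sampling construction, which is designed for the high-dimensional densities arising in data assimilation.

```python
import math
import random

def sample_by_solving(cdf, lo, hi, tol=1e-10):
    # Draw one independent sample from the pdf with the given CDF by
    # solving cdf(x) = u via bisection, with u uniform on [0, 1).
    # Each call is independent of all previous calls -- no chain,
    # no burn-in, no correlated samples.
    u = random.random()
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cdf(mid) < u:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Contrast this with MCMC: there is no walk toward the high-probability region, because the equation itself maps each random input directly into it.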

EECS Colloquium: Data-Driven Image Understanding

Wednesday, March 13, 4:00–5:00 pm, 306 Soda Hall (HP Auditorium), UC Berkeley
Alexei Efros, Carnegie Mellon University

Reasoning about a visual scene from a photograph is an inherently ambiguous task because an image in itself does not carry enough information to disambiguate the world that it is depicting. Yet, humans have no problems understanding a photograph, seamlessly inferring a plethora of information about the physical space of the scene, the depicted objects and their relationships within the scene, rough scene illumination, cues about surface orientations and material properties, even hints about the geographic location. This remarkable feat is largely due to the vast prior visual experience that humans bring to bear on the task. How can we help computers do the same?

Research in my lab over the past decade has focused on the use of large amounts of visual data, both labeled and unlabeled, as a way of injecting visual experience into the task of computational image understanding. In this talk, I will first show some examples of the power of Big Visual Data to address complex visual tasks with surprisingly simple algorithms. I will then describe our data-driven techniques for gaining a deeper understanding of the scene by parsing the image into its constituent elements to infer information about its 3D geometric, photometric, and semantic properties. Applications of our techniques will be demonstrated for several practical tasks including single-view 3D reconstruction, object detection, visual geo-location, and image-based computer graphics.

Special EECS Seminar: Informational Limits of Systems with Humans and Machines

Thursday, March 14, 3:00–4:00 pm, 430 Soda Hall (Wozniak Lounge), UC Berkeley
Lav Varshney, IBM T. J. Watson Research Center

New communication and computing technologies are enabling people to come together to achieve higher quality of life and to develop innovative products and services. The down-to-earth problem of building these informational systems, however, is entangled with theoretical questions of what is possible and what is impossible when bringing together ubiquitous informational technologies with the people and organizations they are transforming. Group decision-making, information factories, and crowdsourcing are all ways of structuring systems to draw on the strengths of many, allowing collective intelligence rather than cacophony to emerge.

In this talk, I will discuss a mathematical model of human decision-making and show the benefits of diversity in groups. Next I will present a model of crowdsourcing with strategic players, and further show empirical evidence from a large-scale system we built that indicates the importance of drawing human attention. A thermodynamic interpretation leads us to ask: is there a Carnot limit for knowledge work? In closing, I discuss how fundamental limits on the transmission of information provide insight into systems with humans and machines.

Time Domain Decomposition Methods for Nonlinear Systems of ODEs

Friday, March 15, 10:00–11:00 am, 50F-1647
Patrice Linel, University of Rochester Medical Center

We propose domain decomposition methods in time for the solution of ODEs. We reformulate the initial value problem as a boundary value problem on the symmetrized time interval, under the assumption that the flow is reversible. We develop two methods: the first related to a Schur complement method, and the second based on a Schwarz-type method, for which we show convergence and its acceleration by the Aitken method in the linear case. To accelerate the convergence of the latter in the nonlinear case, we introduce extrapolation techniques and acceleration of the convergence of nonlinear sequences. We show that Dirichlet-Neumann boundary conditions are not optimal for nonlinear ODEs, and we propose nonlinear boundary conditions that accelerate convergence and also simplify the methodology.

Unleashing the Power of Big Visual Data

Friday, March 15, 12:00–1:00 pm, 510 Soda Hall (Visual Computing Lab), UC Berkeley
Jia Deng, Vision Lab, Stanford University

Big data is revolutionizing the field of computing, especially computer vision. The digital universe is dominated by visual data, which, according to Cisco, will account for more than 80% of web traffic by 2016. Big visual data captures the complexity as well as the richness of the visual world and opens up unprecedented opportunities for endowing machines with superior visual intelligence.

In this talk I will present my research that centers around big visual data. I will start with an overview of the ImageNet project, which harvests big visual data through large-scale crowdsourcing. I will then show how large-scale benchmarking studies based on ImageNet have shed light on future research directions. Armed with the insights from the benchmarking studies, I will next demonstrate how to improve recognition of fine-grained, sub-ordinate categories via a computer game that harvests novel forms of data from crowds. I will also discuss ways to exploit the knowledge structure of big data to improve recognition. In particular, I will present a provably optimal approach to optimizing the trade-offs between accuracy and specificity, which leads to a reliable recognition engine that recognizes more than 10,000 categories.

Link of the Week: How Math Makes the Movies

Pixar Senior Scientist Tony DeRose wanders between rows at New York’s Museum of Mathematics. In a brightly-colored button-up T-shirt that may be Pixar standard issue, he doesn’t look like the stereotype of a scientist. He greets throngs of squirrely, nerdy children and their handlers — parents and grandparents, math and science teachers — as well as their grown-up math nerd counterparts, who came alone or with their friends. One twentysomething has a credit for crowd animation on Cars 2; he’s brought his mom. She wants to meet the pioneer whose work lets her son do what he does.

The topic of DeRose’s lecture is “Math in the Movies.” This topic is his job: translating principles of arithmetic, geometry, and algebra into software that renders objects or powers physics engines. This process is much the same at Pixar as it is at other computer animation or video game studios, he explains; part of why he’s here is to explain why aspiring animators and game designers need a solid base in mathematics. Read more.

About Computing Sciences at Berkeley Lab

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.