
InTheLoop | 10.15.2012


The Path a Proton Takes Through a Fuel Cell Membrane

Experts believe that fuel cells may someday serve as clean energy conversion devices for transportation and other applications, but there are still some design issues that engineers need to sort out before this can happen. One challenge is to develop an inexpensive and robust polymer membrane that effectively conducts protons. In a step toward achieving that goal, researchers are running computer simulations at NERSC to understand how protons move through different polymer membranes. Read more.

Making Sense of Big Data from Supercomputers

Signal Online reports that big data can mean big problems for the people trying to derive usable information from a large number of sources. Since coming into existence in March, the Scalable Data Management, Analysis and Visualization Institute (SDAV) has made strides to resolve this issue for programs running on supercomputers. The young organization’s efforts have applicability to a variety of scientific fields, and its tools are open source so others can take advantage of the findings. SDAV Director Arie Shoshani of CRD is interviewed in the article. Read more.

C3 Postdoc Lukić Co-Authors Paper on Using Cosmic Rays to Assess Damaged Nuclear Reactors

Zarija Lukić, a postdoc in CRD’s Computational Cosmology Center (C3), is the co-author of a paper describing an innovative approach for assessing the damage to Japan’s Fukushima Daiichi nuclear reactors: using cosmic rays to obtain a radiographic image of the reactor cores. Konstantin Borozdin of Los Alamos National Lab is lead author of the paper, which appeared Oct. 11 in Physical Review Letters and is an “editor’s pick.”

According to the synopsis of the paper, “Cosmic rays are charged particles, mostly protons, coming from outer space and hitting the Earth at high speeds. Colliding with molecules in the atmosphere, they generate a shower of other particles. These include muons, sort of heavier versions of electrons that, if sufficiently fast, can penetrate many meters into materials. This property first enabled an intriguing imaging application in 1969, when a team led by Luis Alvarez used muon radiography to search for hidden chambers in the Egyptian pyramids of Giza.” The paper can be accessed at http://prl.aps.org/abstract/PRL/v109/i15/e152501 through Berkeley Lab’s journal subscription.

Open House Visitors Experience “Science at Warp Speed”

Nearly 6,000 members of the community came to the Hill on Saturday, gaining science knowledge and learning more about Berkeley Lab at the annual Open House. Virtually every division hosted exhibits, including the Computational Research Division, NERSC and ESnet.

“Science at Warp Speed: A Journey Through Space and Time in 3D” was the theme of the Computing Sciences exhibit. Volunteers for the day included David Skinner, Terence Sun, Tony Wang, Lauren Rotman, Eli Dart, Eric Roman, Orianna Demassi, Andrew Uselton, Daniela Ushizima, Deb Agarwal, Jon Bashor, Margie Wylie and Linda Vu. Exhibit support was also provided by Joerg Meyer and Yushu Yao.

Pictures can be found on the Berkeley Lab Computing Sciences Facebook Page.

PPPL Wins Grant to Develop Plasma Edge Simulation Code, Will Use NERSC Computers

A center based at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) has won a highly competitive $12.25 million grant to develop computer codes to simulate a key component of the plasma that fuels fusion energy. The five-year DOE award could produce software that helps researchers design and operate facilities to create fusion as a clean and abundant source of energy for generating electricity. Some of the computation for the project will be done at NERSC. Read more.

Lab Women Sought as Mentors for High School Girls in Developing Mobile Apps

The Technovation Challenge, a 12-week program in which teams of high school girls compete to develop cool mobile apps, is again seeking mentors. A mentor recruiting party will be held from 6:00–8:00 pm Tuesday, Oct. 16, in San Francisco. In spring 2012, Berkeley Lab hosted more than 60 girls in 11 teams from Berkeley and Albany high schools through the program, and more than 20 women at the Lab volunteered as mentors. One team from Albany High went on to win both the regional and national competitions with their app, which combined social networking with preparation for college-level advanced placement tests. Read more about the Technovation Challenge and mentor recruiting party.

This Week’s Computing Sciences Seminars

Recent Progresses on Linear Programming and the Simplex Method

Monday, October 15, 2:30–3:30 pm, 521 Cory Hall, UC Berkeley
Yinyu Ye, Stanford University

Linear programming (LP), together with the simplex method, has remained a core topic in operations research, computer science, and mathematics since 1947. Thanks to relentless research effort, a linear program can be solved today one million times faster than it could be thirty years ago. Businesses, large and small, now use LP models to control manufacturing inventories, price commodities, design civil and communication networks, and plan investments. LP has even become a popular subject in undergraduate, graduate, and MBA curricula, advancing human knowledge and promoting science education. The aim of this talk is to describe several exciting recent advances on LP and the simplex method, including counterexamples to the Hirsch conjecture; pivoting rules and their exponential behavior; strongly polynomial-time bounds for the simplex and policy-iteration methods for solving Markov decision processes (MDPs) and turn-based zero-sum games with constant discount factors; and the strongly polynomial-time complexity of the simplex method for solving deterministic MDPs regardless of discounts.
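The modeling power the abstract describes can be illustrated with a toy instance. The sketch below is a hypothetical two-variable production-planning example (not taken from the talk); it exploits the fact that an LP optimum lies at a vertex of the feasible polytope, so a tiny instance can be solved by brute-force vertex enumeration rather than by the simplex method itself:

```python
from itertools import combinations

# Toy LP: maximize 3x + 5y subject to
#   x <= 4,  2y <= 12,  3x + 2y <= 18,  x >= 0,  y >= 0.
# Constraints are stored as (a, b, c) meaning a*x + b*y <= c.
constraints = [(1, 0, 4), (0, 2, 12), (3, 2, 18), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Solve the 2x2 system given by two constraint boundaries, if non-singular."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

# Enumerate all pairwise boundary intersections, keep the feasible vertices,
# and pick the one maximizing the objective.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 5 * p[1])
print(best, 3 * best[0] + 5 * best[1])  # -> (2.0, 6.0) 36.0
```

Vertex enumeration scales hopelessly (exponentially many vertices in general), which is exactly why pivoting rules and the complexity of the simplex method, as discussed in the talk, matter.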

Par Lab Seminar: Programming with People: Integrating Human-Based and Digital Computation

Tuesday, October 16, 1:00–2:30 pm, 430 Soda Hall (Wozniak Lounge), UC Berkeley
Emery Berger, University of Massachusetts, Amherst

Humans can perform many tasks with ease that remain difficult or impossible for computers. Crowdsourcing platforms like Amazon's Mechanical Turk make it possible to harness human-based computational power on an unprecedented scale. However, their utility as a general-purpose computational platform remains limited. The lack of complete automation makes it difficult to orchestrate complex or interrelated tasks. Scheduling human workers to reduce latency costs real money, and jobs must be monitored and rescheduled when workers fail to complete their tasks. Furthermore, it is often difficult to predict the length of time and payment that should be budgeted for any given task. Crucially, the results of human-based computations are not necessarily reliable, both because human skills and accuracy vary widely, and because workers have a financial incentive to minimize their effort.

This talk presents AutoMan, the first fully automatic crowdprogramming system. AutoMan integrates human-based computations into a standard programming language as ordinary function calls, which can be intermixed freely with traditional functions. This abstraction allows AutoMan programmers to focus on their programming logic. An AutoMan program specifies a confidence level for the overall computation and a budget. The AutoMan runtime system then transparently manages all details necessary for scheduling, pricing, and quality control. AutoMan automatically schedules human tasks for each computation until it achieves the desired confidence level; monitors, reprices, and restarts human tasks as necessary; and maximizes parallelism across human workers while staying under budget.
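The quality-control loop described above can be sketched roughly as follows. This is a simplified Python illustration, not AutoMan itself (AutoMan is embedded in Scala, and its actual statistical test differs): answers are collected until the modal answer is unlikely to be chance agreement, with the worker pool simulated and all names made up.

```python
import math
import random

def p_agreement_by_chance(votes_for_top, total_votes, num_options):
    """Probability that at least votes_for_top of total_votes uniformly random
    guesses over num_options choices land on one given option (binomial tail)."""
    p = 1.0 / num_options
    return sum(math.comb(total_votes, k) * p**k * (1 - p)**(total_votes - k)
               for k in range(votes_for_top, total_votes + 1))

def ask_until_confident(ask_worker, num_options, confidence=0.95, max_tasks=100):
    """Schedule tasks until the observed agreement is significant at the
    requested confidence level, mimicking (very loosely) AutoMan's loop."""
    counts = {}
    for n in range(1, max_tasks + 1):
        ans = ask_worker()
        counts[ans] = counts.get(ans, 0) + 1
        top, top_votes = max(counts.items(), key=lambda kv: kv[1])
        if p_agreement_by_chance(top_votes, n, num_options) < 1 - confidence:
            return top, n
    raise RuntimeError("budget exhausted before reaching confidence")

# Simulated worker pool: 80% answer "B" correctly, 20% guess at random.
random.seed(0)
def worker():
    return "B" if random.random() < 0.8 else random.choice("ABCD")

answer, tasks_used = ask_until_confident(worker, num_options=4)
print(answer, tasks_used)
```

The key design point mirrored here is that the caller specifies only a confidence level; how many (paid) human tasks that requires is decided by the runtime, not the programmer.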

AutoMan is available for download at http://www.automan-lang.org.

Automatic Undo for Cloud Management via AI Planning

Wednesday, October 17, 12:00–1:00 pm, 310 Soda Hall, UC Berkeley
Hiroshi Wada, NICTA

Cloud computing provides infrastructure that is managed programmatically through simple APIs. This improves the efficiency of system operations; but having simple yet powerful system operations may increase the chance of human-induced faults, which play a large role in system dependability. To improve dependability in the cloud, it would help if the platform allowed users to roll back the system to recover from failures. An obvious approach is to execute a sequence of compensating operations in reverse chronological order; however, on cloud platforms this is not always feasible. Moreover, cloud APIs are often error-prone: we have frequently observed failures on major commercial cloud platforms. The rollback must therefore handle failures that occur during the undo itself. To improve the dependability of cloud-based systems, we use an AI planner to automatically discover an appropriate sequence of operations that rolls back the system state. Our planner scales well as the number of required operations increases. This work was inspired by our experience developing tool support for users of cloud platforms (see http://yuruware.com). This talk is based on a paper presented at HotDep ’12.
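The planning idea can be sketched with a toy STRIPS-style model. Everything below (operation names, facts) is hypothetical and far simpler than real cloud APIs; it only illustrates searching for a rollback sequence instead of blindly replaying compensations in reverse order:

```python
from collections import deque

# States are sets of facts; each operation has (preconditions, adds, removes).
OPS = {
    "terminate_instance": ({"instance_running"}, set(), {"instance_running"}),
    "launch_instance":    ({"image_exists"}, {"instance_running"}, set()),
    "delete_image":       ({"image_exists"}, set(), {"image_exists"}),
    "create_image":       ({"instance_running"}, {"image_exists"}, set()),
}

def plan(state, goal, max_depth=10):
    """Breadth-first search for an operation sequence from state to goal."""
    start = frozenset(state)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        cur, path = queue.popleft()
        if cur == frozenset(goal):
            return path
        if len(path) >= max_depth:
            continue
        for name, (pre, add, rem) in OPS.items():
            if pre <= cur:                      # preconditions satisfied
                nxt = frozenset((cur - rem) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [name]))
    return None

# Undo scenario: a botched change left an instance running and no image;
# the rollback target is "image exists, no instance running". There is no
# direct compensating operation, but the planner finds a detour.
steps = plan({"instance_running"}, {"image_exists"})
print(steps)  # -> ['create_image', 'terminate_instance']
```

Note that the discovered plan runs a *forward* operation (create_image) before undoing, which a naive reverse-chronological compensator could never produce.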

Reliable Condition Number Estimation with Random Sampling: Scientific Computing and Matrix Computations Seminar

Wednesday, October 17, 12:10–1:00 pm, 380 Soda Hall, UC Berkeley
Ming Gu, UC Berkeley and LBNL/CRD

Condition number estimation has traditionally been regarded as a “quick and dirty” way to compute some estimate of the condition number of a given matrix. Typical condition number estimators usually do a good job of estimating the condition number to within a small factor, but they can also fail to estimate it to any accuracy.

In this talk we show how to use randomized sampling to estimate condition numbers. Except for a tiny failure probability, our algorithm computes reliable condition estimates. We also generalize this algorithm to efficiently compute the p-norm of a given matrix.
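As a rough illustration of the flavor of randomized estimation (not the speaker's algorithm), one can lower-bound cond(A) = ||A||·||A⁻¹|| by sampling random unit vectors, since the spectral norm is the maximum of ||Mx|| over unit x; the bound is reliable except for a small failure probability that shrinks as the sample count grows:

```python
import math
import random

def matvec(M, x):
    return [M[0][0]*x[0] + M[0][1]*x[1], M[1][0]*x[0] + M[1][1]*x[1]]

def sampled_norm(M, samples=2000, rng=random.Random(42)):
    """Lower-bound the spectral norm of a 2x2 matrix by random sampling."""
    best = 0.0
    for _ in range(samples):
        theta = rng.uniform(0, 2 * math.pi)
        x = [math.cos(theta), math.sin(theta)]   # random unit vector
        y = matvec(M, x)
        best = max(best, math.hypot(y[0], y[1]))
    return best

def inverse2x2(M):
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[M[1][1]/det, -M[0][1]/det], [-M[1][0]/det, M[0][0]/det]]

A = [[100.0, 0.0], [0.0, 1.0]]          # true condition number: 100
estimate = sampled_norm(A) * sampled_norm(inverse2x2(A))
print(estimate)  # a sharp lower bound on cond(A), close to 100
```

The sampled estimate can only underestimate; quantifying how unlikely a large underestimate is — the "tiny failure probability" — is precisely where the analysis in such algorithms lives.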

Joint work with Chris Melgaard.

Making Geo-Replicated Systems Fast as Possible, Consistent When Necessary

Wednesday, October 17, 3:00–4:00 pm, Soda Hall, Wozniak Lounge, UC Berkeley
Allen Clement, Max Planck Institute for Software Systems

Online services distribute and replicate state across geographically diverse data centers and direct user requests to the closest or least loaded site. While effectively ensuring low-latency responses, this approach is at odds with maintaining cross-site consistency. We make three contributions to address this tension. First, we propose RedBlue consistency, which enables blue operations to be fast (and eventually consistent) while the remaining red operations are strongly consistent (and slow). Second, to make use of fast operations whenever possible and resort to strong consistency only when needed, we identify conditions delineating when operations can be blue and when they must be red. Third, we introduce a method that increases the space of potential blue operations by breaking them into separate generator and shadow phases. We built a coordination infrastructure called Gemini that offers RedBlue consistency, and we report on our experience modifying the TPC-W and RUBiS benchmarks and an online social network to use Gemini. Our experimental results show that RedBlue consistency provides substantial performance gains without sacrificing consistency.
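The generator/shadow decomposition can be sketched as follows, loosely following a banking-style illustration (details simplified and hypothetical): an interest operation depends on the current balance, so replaying the original operation at each replica would diverge. Splitting it into a generator that computes the concrete amount once and a shadow that merely adds that fixed amount turns it into a commutative, blue operation.

```python
def deposit_shadow(amount):
    """Shadow phase: a state-independent delta, safe to apply in any order."""
    return lambda state: {"balance": state["balance"] + amount}

def accrue_interest_generator(state, rate=0.05):
    """Generator phase: runs once, at the replica that received the request,
    against that replica's current state; emits a fixed-delta shadow op."""
    return deposit_shadow(state["balance"] * rate)

# Two replicas receive the same shadow operations in different orders.
r1 = {"balance": 100.0}
r2 = {"balance": 100.0}
ops = [deposit_shadow(20.0), accrue_interest_generator(r1)]  # interest = 5.0

for op in ops:             # replica 1: deposit first, then interest shadow
    r1 = op(r1)
for op in reversed(ops):   # replica 2: interest shadow first, then deposit
    r2 = op(r2)

print(r1, r2)  # both converge to a balance of 125.0
```

Because the shadow operations are pure additions, they commute, and the replicas converge regardless of delivery order — the property that lets such operations be classified blue.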

EECS Colloquium: Reinventing Education

Wednesday, October 17, 4:00–5:00 pm, 306 Soda Hall (HP Auditorium), UC Berkeley
Anant Agarwal, MIT, President of edX

Prof. Agarwal will speak about edX, a not-for-profit enterprise of its founding partners, Harvard University and the Massachusetts Institute of Technology, that offers Massive Open Online Courses (MOOCs) and other online education featuring learning designed specifically for interactive study via the web.

DREAM Seminar: Enclosing Hybrid Behavior

Wednesday, October 17, 4:10–5:00 pm, 540 Cory Hall, UC Berkeley
Walid Taha, Halmstad University, Sweden, and Rice University, USA

Rigorous simulation of hybrid systems relies critically on having a semantics that constructs enclosures. Edalat and Pattinson's work on the domain-theoretic semantics of hybrid systems almost provides what is needed, with two exceptions. First, domain-theoretic methods leave many operational concerns implicit. As a result, the feasibility of practical implementations is not obvious. For example, their semantics appears to rely on repeated interval splitting for state space variables. This can lead to exponential blow up in the cost of the computation. Second, common and even simple hybrid systems exhibit Zeno behaviors. Such behaviors are a practical impediment because they make simulators loop indefinitely. This is in part due to the fact that existing semantics for hybrid systems generally assume that the system is non-Zeno.

The feasibility of reasonable implementations is addressed by specifying the semantics algorithmically. We observe that the amount of interval splitting can be influenced by the representation of function enclosures. Parameterizing the semantics with respect to enclosure representation provides a precise specification of the functionality needed from them, and facilitates studying their performance characteristics. For example, we find that non-constant enclosure representations can alleviate the need for interval splitting on dependent variables.
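As a much-simplified illustration of enclosure construction (this is a textbook validated interval-Euler step, not the semantics from the talk), the code below encloses the solution of the scalar ODE x' = -x over a time step: it first finds a box B that provably contains the trajectory over the whole step, then tightens the endpoint enclosure using the slope over B.

```python
def f_interval(x):
    """Interval extension of f(x) = -x, with intervals as (lo, hi) pairs."""
    lo, hi = x
    return (-hi, -lo)

def hull(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def step(x, h, eps=1e-3):
    """One validated Euler step over [t, t+h] for x' = f(x)."""
    fx = f_interval(x)
    euler = (x[0] + h * fx[0], x[1] + h * fx[1])
    b = hull(x, euler)
    b = (b[0] - eps, b[1] + eps)            # inflate the candidate box
    fb = f_interval(b)
    # The trajectory stays in B if x + [0,h]*f(B) is contained in B.
    drift = (min(0.0, h * fb[0]), max(0.0, h * fb[1]))
    assert b[0] <= x[0] + drift[0] and x[1] + drift[1] <= b[1], "box not validated"
    # Endpoint enclosure: x(t+h) = x(t) + integral of f over the step.
    return (x[0] + h * fb[0], x[1] + h * fb[1])

x = (1.0, 1.0)                   # x(0) = 1
for _ in range(10):              # enclose x(1), step size h = 0.1
    x = step(x, 0.1)
print(x)                         # interval containing exp(-1) ~= 0.3679
```

Note how the enclosure width grows each step: the constant (interval-valued) representation of the flow over the box is crude, which is exactly the motivation, mentioned above, for non-constant enclosure representations that reduce splitting and wrapping.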

We address the feasibility of dealing with Zeno systems by taking a fresh look at event detection and localization. The key insight is that computing enclosures for hybrid behaviors over intervals containing multiple events does not necessarily require separating these events in time, even when the number of events is unbounded. In contrast to current methods for dealing with Zeno behaviors, this semantics does not require reformulating the hybrid system model specifically to enable a transition to a post-Zeno state.

The new semantics does not sacrifice the key qualities of the original work, namely, convergence on separable systems.

TRUST Security Seminar: Eternal Sunshine of the Spotless Machine: Helping Computers Forget

Thursday, October 18, 1:00–2:00 pm, 430 Soda Hall (Wozniak Lounge), UC Berkeley
Vitaly Shmatikov, The University of Texas at Austin

Modern computers keep long memories. Traces of user activities — visited websites, voice-over-IP conversations, watched videos — remain in application and OS memory, file system, device drivers, memory of peripheral devices, etc. Even Web browsers that support "private" or "incognito" mode leave evidence of user behavior in system resources outside their control.

This talk will present the design and implementation of Lacuna, a system that allows users to run programs in "private sessions." After the session is over, all evidence of the program's execution is erased. The key abstraction in Lacuna is an "ephemeral channel", which allows the protected program to talk to peripheral devices while making it possible for all memories of this communication to be deleted from the host. Lacuna can run full-system applications with protected graphics, sound, USB, and network channels with only a modest CPU overhead.

Link of the Week: Reinventing Society in the Wake of Big Data

Alex “Sandy” Pentland is a pioneer in big data, computational social science, mobile information systems, and technology for developing countries. He directs MIT's Human Dynamics Laboratory and the MIT Media Lab Entrepreneurship Program. In a conversation with Edge, he discusses “Reinventing Society in the Wake of Big Data.” Some excerpts:

With Big Data traditional methods of system building are of limited use. The data is so big that any question you ask about it will usually have a statistically significant answer. This means, strangely, that the scientific method as we normally use it no longer works, because almost everything is significant! As a consequence the normal laboratory-based question-and-answering process, the method that we have used to build systems for centuries, begins to fall apart….

Adam Smith and Karl Marx were wrong, or at least had only half the answers. Why? Because they talked about markets and classes, but those are aggregates. They're averages.

While it may be useful to reason about the averages, social phenomena are really made up of millions of small transactions between individuals. There are patterns in those individual transactions that are not just averages, they're the things that are responsible for the flash crash and the Arab spring. You need to get down into these new patterns, these micro-patterns, because they don't just average out to the classical way of understanding society. We're entering a new era of social physics, where it's the details of all the particles—the you and me—that actually determine the outcome….

We can potentially design companies, organizations, and societies that are more fair, stable and efficient as we get to really understand human physics at this fine-grain scale.

This conversation is part of the series “Computational Social Science @ Edge.”

About Computing Sciences at Berkeley Lab

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.