
InTheLoop | 11.01.2010

November 1, 2010

Alternative Yardstick to Measure the Universe

Astronomers have long relied on stellar explosions called Type Ia supernovae to measure the scale of the cosmos. But Dovi Poznanski of Berkeley Lab’s Computational Cosmology Center (C3) believes that Type II-P supernovae may now be put to the same use. His most recent findings, co-authored with Peter Nugent of C3 and Alexei Filippenko of UC Berkeley, were published on Oct. 1 in the Astrophysical Journal and featured in the Oct. 23 issue of Nature News.


China’s No. 1 System Called “More Important Than Earth Simulator”

Although the fall TOP500 list has not yet been officially announced, the Wall Street Journal reports that a supercomputer at the Chinese National Supercomputing Center in Tianjin will take the No. 1 spot with a Linpack performance of 2.5 petaflops (and a peak performance of 4.7 petaflops, using only 4 MW of power). It is part of a new breed of systems that exploits graphics chips more commonly used for playing video games—supplied by Nvidia Corp.—alongside standard microprocessors from Intel Corp. The US, Germany, and the UK had no GPU-equipped systems on the last TOP500 list.

While the communications chips inside the Tianjin machine are proprietary and were designed in China, most of the system relies on American-made chips, so U.S. customers could presumably construct a system with similar performance, noted Horst Simon, Deputy Director of Berkeley Lab.

The WSJ quotes Simon as saying that “this is a much more important event than the Earth Simulator,” because while the Japanese system was a single machine, the Tianjin system is part of a multi-year Chinese strategy to develop a range of machines and establish a dominant position in both military and commercial applications.

See also “New Chinese GPGPU Super Outruns Jaguar” in HPCwire.


Version 3.2 of the perfSONAR pS Performance Toolkit Is Released

The perfSONAR collaboration, which includes ESnet, Internet2, and others, has just announced the release of version 3.2 of the pS Performance Toolkit. The latest version of perfSONAR-PS is available here.

“perfSONAR is critical to helping our constituents achieve acceptable network performance for scientific applications,” commented Eli Dart, network engineer at ESnet, who uses perfSONAR routinely. “The continued progress by the perfSONAR collaboration in developing and deploying practical test and measurement infrastructure is helping our scientists conduct critical research collaborations in many areas including climate, energy, biology and genomics.” Read more.


ASCR Discovery Highlights ISICLES Research

The online magazine ASCR Discovery last week posted a feature article, “Researchers set out to crack ice fracture modeling puzzle,” that explores the many-faceted research project ISICLES — Ice Sheet Initiative for CLimate ExtremeS. A sidebar titled “Program aims to get ice right” quotes Esmond Ng as saying that the Intergovernmental Panel on Climate Change (IPCC)

acknowledged that they did not have the right computing capability or the modeling effort to understand ice sheet dynamics…. The problem of getting an accurate, high-resolution modeling of land ice has not really been looked at in a grand-scale way…. That’s the motivation for ISICLES. The funding supports applied mathematicians and computer scientists so they can work with climate scientists to further advance modeling capability.

Ng, leader of CRD’s Scientific Computing Group, is the PI of Berkeley-ISICLES (BISICLES), an effort to develop high-performance adaptive algorithms for ice sheet modeling. BISICLES is employing and extending existing algorithmic tools, especially adaptive mesh refinement, for high-resolution ice sheet modeling at relatively low computational cost.
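
For readers unfamiliar with adaptive mesh refinement, the sketch below illustrates the basic idea behind it: flag the cells where the solution varies most rapidly and refine only those. It is a minimal, hypothetical 1D example, not the BISICLES or Chombo implementation.

# Minimal 1D illustration of the adaptive-mesh-refinement idea:
# flag cells where the field gradient is steep and refine only those.
# This is a hypothetical sketch, not the BISICLES/Chombo algorithm.
import numpy as np

def flag_cells_for_refinement(x, u, threshold):
    """Return indices of cells whose local gradient exceeds `threshold`."""
    grad = np.abs(np.gradient(u, x))
    return np.where(grad > threshold)[0]

def refine(x, flagged):
    """Insert a midpoint in every flagged cell (one level of refinement)."""
    new_points = []
    for i in flagged:
        if i + 1 < len(x):
            new_points.append(0.5 * (x[i] + x[i + 1]))
    return np.sort(np.concatenate([x, new_points]))

# Coarse grid resolving a smooth field with one sharp feature
x = np.linspace(0.0, 1.0, 33)
u = np.tanh((x - 0.5) / 0.02)          # steep transition near x = 0.5
flagged = flag_cells_for_refinement(x, u, threshold=5.0)
x_fine = refine(x, flagged)
print(f"{len(x)} coarse points -> {len(x_fine)} points after refinement")

The point of the refinement criterion is that resolution is added only near the steep feature, which is how high-resolution ice sheet modeling can stay relatively cheap.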


NERSC/CASTRO Supernova Image Featured in iSGTW

An image showing a 3D time series of the development and expansion of a supernova shock was featured as “Image of the week” in the October 13 issue of International Science Grid This Week (iSGTW). The image, created by Jason Nordhaus and Adam Burrows of Princeton University, was computed on NERSC’s Franklin system using CASTRO, a program developed by Ann Almgren and John Bell.


Students Invited to Submit Papers for Prize at SIAM CS&E 2011

The Third BGCE Student Paper Prize will be awarded at the 2011 SIAM CS&E Conference, February 28–March 4, 2011, in Reno, Nevada, for outstanding student work in computational science and engineering. The prize was founded by the Bavarian Graduate School of Computational Engineering (BGCE), a consortium that offers an honors track to the best students of three international Master’s programs in Computational Engineering at Technische Universitaet Muenchen (TUM) and Universitaet Erlangen-Nuernberg (FAU).

The prize winner will be invited to spend one week in Bavaria (air fare and lodging expenses covered), visiting FAU and TUM and becoming acquainted with BGCE’s educational and research program, one of the most advanced in Europe. The main objective is to promote excellent students in CS&E and to foster international exchange at an early career stage. Undergraduate and graduate students who have not yet received their PhD at the date of submission are eligible. Click here for application information.


This Week’s Computing Sciences Seminars

Parallel Visualization Tools and Techniques for Magnetic Vector Fields, the Magnetosphere, and Flux Transfer Events in Global Hybrid Magnetospheric Simulations
Monday, Nov. 1, 9:00–10:00 am, 50F-1647
Burlen Loring, University of California, San Diego

In this talk I will present the SciVis toolkit, which our group has developed to extend ParaView with highly specialized parallel analysis and visualization algorithms tailored specifically for magnetospheric science. I will briefly cover H3D, our global hybrid magnetospheric simulation code, which has successfully run on 99k cores on Kraken, and discuss some challenges in global magnetospheric simulations and open areas of research in magnetospheric physics. Understanding flux transfer events (FTEs) and the role they play in the transfer of energy and momentum from the solar wind into the Earth’s magnetosphere is one topic of active research in magnetospheric physics. I will discuss a vector field topology mapping technique that we have developed to visualize the magnetosphere and have used as the basis for an FTE detection and tracking algorithm. I will present our research into the association of flow vortices with FTE formation and evolution. A number of visualizations of the magnetosphere and FTEs will be presented.
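
Field-line and streamline tracing is one basic building block underlying vector field topology techniques like the one described above. The sketch below is only a generic illustration with a made-up analytic field; it is not the SciVis toolkit’s algorithm.

# Generic streamline (field-line) tracing by fourth-order Runge-Kutta.
# Illustrative only -- not the SciVis toolkit's parallel algorithm.
import numpy as np

def field(p):
    """Example analytic 2D vector field (a simple circulation pattern)."""
    x, y = p
    return np.array([-y, x])

def trace_streamline(seed, step=0.01, n_steps=1000):
    """Integrate a field line from `seed` with classical RK4 steps."""
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        p = pts[-1]
        k1 = field(p)
        k2 = field(p + 0.5 * step * k1)
        k3 = field(p + 0.5 * step * k2)
        k4 = field(p + step * k3)
        pts.append(p + (step / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(pts)

line = trace_streamline(seed=(1.0, 0.0))
print("traced", len(line), "points; final position", line[-1])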

The Missing Link: Multi-Domain Cloud Computing with Multi-Layer Networks
Monday, Nov. 1, 1:30–2:30 pm, 50F-1647
Jeff Chase, Duke University

Cloud infrastructure services manage a shared pool of servers as a unified hosting substrate for diverse applications, using various technologies to virtualize servers and orchestrate their operation. At the same time, high-speed networks increasingly offer dynamic provisioning services at multiple layers. Clouds offer a general, flexible, and powerful model to scale up computing power, and multi-layer networks enable direct high-speed interconnections among cloud applications running at multiple cloud sites.

This talk presents a new project to build support for deeply networked multi-domain cloud applications that incorporate resources from multiple infrastructure providers. The vision is to enable cloud applications to request virtual servers at multiple points in the network, together with a secure private network of dynamic circuits to interconnect them and link them with other services and data repositories. The control framework is the Open Resource Control Architecture (ORCA), an extensible platform for dynamic leasing of resources in a shared networked infrastructure. We have conducted initial experiments demonstrating coordinated allocation from multiple Eucalyptus cloud sites and dynamic circuit networks including RENCI’s Breakable Experimental Network (BEN), a multi-layer regional optical network testbed.

DRESS Code for the Storage Cloud
Monday, Nov. 1, 1:30–2:30 pm, 400 Cory Hall (Hughes Room), UC Berkeley
Salim El Rouayheb, UC Berkeley

“Fifteen young ladies in a school walk out three abreast for seven days in succession; it is required to arrange them daily, so that no two will walk twice abreast.” Thomas Kirkman posed this problem in 1850 in The Lady’s and Gentleman’s Diary, thereby starting the area of combinatorial design theory. In this talk, we will show how solving Kirkman’s schoolgirl problem, and related problems, lies at the heart of constructing capacity-achieving, low-latency, and bandwidth-efficient codes for distributed cloud storage. We call these codes Distributed REplication-based Simple Storage (DRESS) codes.

DRESS codes have linear encoding and decoding complexity and permit fast regeneration of data lost due to failures with no coding operations and minimal bandwidth overhead. The design of optimal DRESS codes translates into interesting combinatorial problems with many open questions. We present optimal code constructions based on projective planes and Steiner systems. For large-scale systems, we propose randomized code constructions using a balls-and-bins approach. When the security in the cloud is breached and some nodes start acting maliciously, our codes can not only guarantee data integrity, but also help catch the bad guys. (Joint work with Sameer Pawar and Kannan Ramchandran.)
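
As a small illustration of the combinatorial constraint at the heart of Kirkman’s problem (no two girls may walk abreast twice), the sketch below checks an arbitrary schedule for that property. The two-day schedule it tests is a hypothetical toy example, not a full seven-day solution.

# Check the Kirkman schoolgirl constraint: across all days, no pair of
# girls walks in the same row of three more than once. The schedule used
# here is a hypothetical two-day toy example, not a full 7-day solution.
from itertools import combinations

def violates_pair_constraint(schedule):
    """Return the first repeated pair, or None if no pair walks together twice."""
    seen = set()
    for day in schedule:
        for group in day:
            for pair in combinations(sorted(group), 2):
                if pair in seen:
                    return pair
                seen.add(pair)
    return None

toy_schedule = [
    [(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12), (13, 14, 15)],    # day 1
    [(1, 4, 7), (2, 10, 13), (5, 11, 14), (8, 12, 15), (3, 6, 9)],    # day 2
]
repeated = violates_pair_constraint(toy_schedule)
print("OK so far" if repeated is None else f"pair {repeated} repeats")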

Correspondence of Computational and Statistical Physics Thresholds
Monday, Nov. 1, 4:00–5:00 pm, Soda Hall, Wozniak Lounge, UC Berkeley
Allan Sly, Microsoft Research, Redmond

I'll discuss the problem of approximately counting independent sets in sparse graphs and its relationship to the hardcore model from statistical physics. The hardcore model is the probability distribution over independent sets I of a graph, with each set weighted proportionally to λ^{|I|} for a parameter λ > 0.
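
For readers unfamiliar with the model, the sketch below computes the hardcore distribution on a tiny example graph by brute-force enumeration, directly from the definition above; it is purely illustrative and has nothing to do with the approximate counting algorithms the talk addresses.

# Brute-force illustration of the hardcore model: enumerate all independent
# sets I of a small graph and weight each proportionally to lam**len(I).
# Purely illustrative; the talk concerns *approximate* counting on sparse graphs.
from itertools import combinations

def independent_sets(vertices, edges):
    """Yield every subset of `vertices` containing no edge of `edges`."""
    edge_set = {frozenset(e) for e in edges}
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            if not any(frozenset(p) in edge_set for p in combinations(subset, 2)):
                yield subset

def hardcore_distribution(vertices, edges, lam):
    """Return {I: probability} with P(I) proportional to lam**len(I)."""
    weights = {I: lam ** len(I) for I in independent_sets(vertices, edges)}
    Z = sum(weights.values())          # partition function
    return {I: w / Z for I, w in weights.items()}

# 4-cycle: vertices 0-1-2-3-0
dist = hardcore_distribution(vertices=[0, 1, 2, 3],
                             edges=[(0, 1), (1, 2), (2, 3), (3, 0)],
                             lam=2.0)
for I, p in sorted(dist.items(), key=lambda kv: -kv[1]):
    print(I, round(p, 3))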

Molecular Foundry Seminar: Quantum-Mechanics-Based Design Concepts for Improved Materials for Energy Applications
Tuesday, Nov. 2, 1:30 pm, Bldg. 66 Auditorium
Emily Carter, Departments of Mechanical and Aerospace Engineering and Applied and Computational Mathematics, Princeton University

If we are to survive as a species on this planet, we must make major science and engineering breakthroughs in the way we harvest, store, and use energy. Many of the advances must come in the areas of materials science and the physics and chemistry of materials. This talk begins with an overview of my own research efforts in this direction, which entail: optimizing materials to improve efficiency of jet turbine engines used for power generation, characterizing combustion of biofuels and tritium incorporation in fusion reactor walls, optimizing mechanical properties of lightweight metal alloys for use in vehicles to improve their fuel efficiency, optimizing ion and electron transport in solid oxide fuel cell cathodes, and designing novel materials from abundant elements for photovoltaics and photocatalysts to convert sunlight into electricity and fuels. Fast and accurate quantum mechanics methods developed in my group that enable the treatment of large biofuel molecules and the mesoscale defects in metals that control mechanical properties will be briefly discussed. The remainder of the talk will provide examples of key metrics we calculate to help design efficient new materials for photovoltaics, photocatalysts, and solid oxide fuel cells. These metrics are already pointing us toward which dopants or alloys are likely to provide the most efficient energy conversion materials.

Scal: Non-Linearizable Computing Breaks the Scalability Barrier
Tuesday, Nov. 2, 4:00–5:00 pm, 540 Cory Hall, UC Berkeley
Christoph Kirsch, University of Salzburg

We propose a relaxed version of linearizability and a set of load balancing algorithms for trading off adherence to concurrent data structure semantics against scalability. We consider data structures that store elements in a given order, such as stacks and queues. Intuitively, a concurrent stack, for example, is linearizable if the effect of push and pop operations on the stack always occurs instantaneously. A linearizable stack guarantees that pop operations return the youngest stack elements first, i.e., the elements in the reverse order in which the operations that pushed them onto the stack took effect. Linearizability allows concurrent (but not sequential) operations to be reordered arbitrarily. We relax linearizability to k-linearizability with k > 0, which also allows sequences of up to k - 1 sequential operations to be reordered arbitrarily and thus executed concurrently. With a k-linearizable stack, for example, a pop operation may return not the youngest but the k-th youngest element on the stack. It turns out that k-linearizability may be tolerated by concurrent applications such as process schedulers and web servers, which already use it implicitly. Moreover, k-linearizability provides positive scalability in some cases because more operations may be executed concurrently, but it may still be too restrictive under high contention. We therefore propose a set of load balancing algorithms, which significantly improve scalability by approximating k-linearizability probabilistically. We introduce Scal, an open-source framework for implementing k-linearizable approximations of concurrent data structures, and show in multiple benchmarks that Scal provides positive scalability for concurrent data structures that typically do not scale under high contention.
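
To make the trade-off concrete, here is a minimal, hypothetical sketch of one way to relax stack semantics for scalability: spread elements over k partial stacks so that a pop returns a recent element rather than necessarily the youngest. It is not Scal’s implementation or its load balancing algorithms.

# Hypothetical sketch of a relaxed stack: elements are spread over k partial
# stacks, so a pop returns the top of one of the k partial stacks, which is
# recent but not necessarily the globally youngest element. This is NOT Scal's
# implementation; it only illustrates trading strict LIFO order for scalability.
import random

class RelaxedStack:
    def __init__(self, k):
        self.k = k
        self.partials = [[] for _ in range(k)]   # k independent partial stacks

    def push(self, item):
        # Choosing a random partial stack reduces contention in a concurrent
        # setting (each partial stack could have its own lock).
        random.choice(self.partials).append(item)

    def pop(self):
        # Pop from a random non-empty partial stack: recent, but not
        # necessarily the globally youngest element.
        non_empty = [s for s in self.partials if s]
        if not non_empty:
            raise IndexError("pop from empty relaxed stack")
        return random.choice(non_empty).pop()

s = RelaxedStack(k=4)
for i in range(10):
    s.push(i)
print([s.pop() for _ in range(10)])   # roughly, but not strictly, LIFO order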

Two-Stage Framework for a Topology-Based Projection and Visualization of Classified Document Collections
Wednesday, Nov. 3, 10:30–11:00 am, 50F-1647
Patrick Oesterling, University of Leipzig, Germany

Over the last few decades, electronic textual information has become the world’s largest and most important available information source. People have added a variety of daily newspapers, books, scientific and governmental publications, blogs, and private messages to this wellspring of endless information and knowledge. Since neither the existing nor the new information can be read in its entirety, computers are used to extract and visualize meaningful or interesting topics and documents from this huge information clutter.

In this paper, we extend, improve, and combine existing individual approaches into an overall framework that supports topological analysis of high-dimensional document point clouds given by the well-known tf-idf document-term weighting method. We show why traditional distance-based approaches fail in very high-dimensional spaces, and we describe an improved two-stage method for topology-based projections from the original high-dimensional information space to both two-dimensional (2D) and three-dimensional (3D) visualizations. To show the accuracy and usability of this framework, we compare it to recently introduced methods and apply it to complex document and patent collections.
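
For context, tf-idf weights each term in each document by its frequency in that document multiplied by a log-scaled inverse of the fraction of documents containing it. The sketch below is a bare-bones version of that weighting, assuming the common log-idf variant rather than whatever normalization the authors use.

# Bare-bones tf-idf weighting: term frequency times log-scaled inverse
# document frequency. This assumes the common log-idf variant; the paper may
# use a different normalization.
import math
from collections import Counter

def tf_idf(documents):
    """documents: list of token lists. Returns one {term: weight} dict per doc."""
    n_docs = len(documents)
    df = Counter()                       # in how many documents each term appears
    for doc in documents:
        df.update(set(doc))
    weights = []
    for doc in documents:
        tf = Counter(doc)
        weights.append({term: count * math.log(n_docs / df[term])
                        for term, count in tf.items()})
    return weights

docs = [["ice", "sheet", "model"],
        ["ice", "cream"],
        ["sheet", "music", "model", "model"]]
for w in tf_idf(docs):
    print({t: round(v, 2) for t, v in w.items()})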

Drawing Contour Trees in the Plane
Wednesday, Nov. 3, 11:00–11:30 am, 50F-1647
Christian Heine, University of Leipzig, Germany

The contour tree compactly describes scalar field topology. From the viewpoint of graph drawing, it is a tree with attributes at vertices and optionally on edges. Standard tree drawing algorithms emphasize structural properties of the tree and neglect the attributes. Applying known techniques to convey this information proves hard and sometimes even impossible. We present several adaptations of popular graph drawing approaches to the problem of contour tree drawing and evaluate them. We identify five aesthetic criteria for drawing contour trees and present a novel algorithm for drawing contour trees in the plane that satisfies four of these criteria. Our implementation is fast and effective for contour tree sizes usually used in interactive systems (around 100 branches) and also produces readable pictures for larger trees, as is shown for a ~800 branch example.

LAPACK Seminar: Quantum Walks and Linear Algebra
Wednesday, Nov. 3, 11:10 am–12 pm, 380 Soda Hall, UC Berkeley
F. Alberto Grunbaum, UC Berkeley

I will describe from scratch what people call a quantum walk, and indicate how the spectral theory of the appropriate unitary operator can be helpful in answering some questions about recurrence and localization for these walks. I will try to show how these connections give rise to (possibly) new questions in linear algebra. I welcome a lot of feedback from the audience.
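
As a concrete starting point, the standard discrete-time quantum walk on a line evolves a joint position and coin state by applying a coin unitary and then a coin-conditioned shift at each step. The sketch below simulates the Hadamard walk with NumPy; it is a generic illustration, not anything specific to the talk.

# Discrete-time quantum walk on a line: state lives on (position x coin),
# one step = (apply Hadamard coin) then (shift left/right conditioned on coin).
# Generic illustration only.
import numpy as np

def hadamard_walk(n_steps, n_positions):
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # coin operator
    psi = np.zeros((n_positions, 2), dtype=complex)       # amplitude[pos, coin]
    psi[n_positions // 2, 0] = 1.0                        # start at the center
    for _ in range(n_steps):
        psi = psi @ H.T                                   # coin step on every site
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]                      # coin 0 moves right
        shifted[:-1, 1] = psi[1:, 1]                      # coin 1 moves left
        psi = shifted
    return (np.abs(psi) ** 2).sum(axis=1)                 # position distribution

prob = hadamard_walk(n_steps=50, n_positions=201)
positions = np.arange(prob.size) - prob.size // 2
print("total probability:", round(float(prob.sum()), 6))
print("rms displacement from start:",
      round(float(np.sqrt(np.sum(prob * positions ** 2))), 2))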

Formal Methods for Dependable Computing: From Models, through Software, to Circuits
Wednesday, Nov. 3, 12:00–1:00 pm, Sutardja Dai Hall, Banatao Aud., UC Berkeley
Sanjit A. Seshia, UC Berkeley

Computing has become ubiquitous and indispensable: it is embedded all around us, in cell phones, automobiles, medical devices, and much more. This ubiquity brings with it a growing challenge to ensure that our computing infrastructure is also dependable and secure. We need to develop and maintain complex software systems on top of increasingly unreliable computing substrates under stringent resource constraints such as energy usage.

In this talk, I will discuss how formal methods — mathematical techniques for specification, design, and verification of systems — can help improve system dependability. As illustrative case studies, I will present work we have done on verifying timing properties of control software (relevant for cyber-physical systems, such as automobiles), analyzing the impact of faulty devices in digital circuits, and designing electronic voting machines so as to enable verification of desired properties. The talk will also touch upon some of the core underlying technology, such as satisfiability modulo theories (SMT) solvers, and describe challenges for the future.
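
As a tiny taste of the SMT technology mentioned above, the sketch below uses the Z3 solver’s Python bindings (an assumed dependency; any SMT solver would do) to ask whether a simple timing bound can be violated. It is illustrative only and not drawn from the speaker’s work.

# Tiny SMT illustration (using Z3's Python bindings, assumed to be installed):
# ask whether two tasks with given bounds can ever miss a 10 ms deadline.
# Illustrative only; not taken from the speaker's verification work.
from z3 import Real, Solver, And, sat

t1, t2 = Real("t1"), Real("t2")          # execution times of two code segments
s = Solver()
s.add(And(t1 >= 1, t1 <= 4))             # segment 1 takes 1..4 ms
s.add(And(t2 >= 2, t2 <= 5))             # segment 2 takes 2..5 ms
s.add(t1 + t2 > 10)                      # can the 10 ms deadline be missed?
if s.check() == sat:
    print("deadline can be missed, e.g.:", s.model())
else:
    print("deadline always met under these bounds")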

From Point to Pixel: Digitising Colour, Texture, Shape and Size
Wednesday, Nov. 3, 12:00–1:00 pm, 2251 College Ave. (Archaeological Research Facility), Room 101
Adam Spring, University of Exeter, Cornwall, UK

The advent of image- and laser-based systems has increased the profile of photometric and geometric (colour, texture, shape and size) data capture, particularly in the 21st century. Whilst the application of techniques like laser scanning and photogrammetry to cultural heritage is well documented, their relationship to each other and how they may be used holistically to answer questions within archaeology is not. In my presentation I explore the consequences of going digital with well-established and developing technologies, how they are interconnected, and how they may be combined or understood in a way that provides a best-fit solution to the artefacts and sites they are applied to.

Supercomputing and Financial Markets
Friday, Nov. 5, 11:00 am–12:00 pm, 50B-4205
David Leinweber, CRD, Center for Innovative Financial Technology

Modern financial markets are complex, interconnected information machines. Data accumulates at hundreds of gigabytes per day for the US stock market alone. The May 6 “Flash Crash” demonstrated that modern electronic markets are capable of unexpected, and dangerous, behavior. Imagine the economic consequences if US stocks crashed to penny levels often, for reasons no one understood. Federal regulators were overwhelmed with the task of analyzing the Flash Crash. Data collection is incomplete, and woefully inadequate, with one-second resolution in a market whose participants move in microseconds. Tools for visualizing, analyzing and simulating markets that move in microseconds were non-existent. The report on the May 6 crash is dated September 30.

The SEC is fully aware of this, and has proposed spending $4 billion to build a real-time monitoring system, and $2 billion annually to operate it. The real-time requirement is debatable, but the need to see what ever-faster wired markets are doing is not. Supercomputing has a great deal to offer in this regard. This talk discusses the issues, tasks, and technologies needed to understand modern financial markets and support regulators in their efforts to promote free, stable, reliable and fair financial markets.


Link of the Week: How Significant Is Computing?

Is computing more important than eating? Apparently readers of The Economist think so—or at least that subset of readers who participate in online opinion polls. A just-completed online debate considered the proposition “This house believes the development of computing was the most significant technological advance of the 20th century,” and 74% of participants agreed.

The opposition argued that the Haber-Bosch synthesis of ammonia for fertilizer and the introduction of high-yielding crop varieties that could take advantage of that abundant nitrogen supply created the well-nourished societies that could invent and take advantage of computers; but only 26% of voters agreed.



About Computing Sciences at Berkeley Lab

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.