
InTheLoop | 09.10.2012


NERSC Postdoc Creates a Way to Share Burgeoning X-Ray Data

Researchers, who naturally want to publish their results first, don’t often share their data. But unexpected discoveries may lurk in the tidal wave of information flooding in from fields like x-ray imaging. NERSC’s Filipe Maia has created the Coherent X-ray Imaging Data Bank (CXIDB), which premiered in February with x-ray diffraction images of mimivirus, one of the world’s largest viruses, made by an international team working at SLAC’s Linac Coherent Light Source. Maia’s colleague Stefano Marchesini of the ALS says the pioneering initiative, described in the current issue of Nature Methods, “will usher in a new era of open access data.”


BER/NERSC Requirements Workshop This Week

The DOE Office of Advanced Scientific Computing Research (ASCR), the Office of Biological and Environmental Research (BER), and NERSC are sponsoring a workshop on “Large Scale Computing and Storage Requirements for Biological and Environmental Research: Target 2017” on Sept. 11–12 in Rockville, MD. Harvey Wasserman and Richard Gerber of NERSC are coordinators of the workshop.

The goal of this meeting is to elucidate computing, storage, and service requirements for research sponsored by BER at NERSC. These requirements will serve as input to NERSC and ASCR planning processes for systems and support, and will help ensure that NERSC continues to provide world-class support for scientific discovery to DOE scientists and their collaborators. The tangible outcome of the review will be a report that includes BER computing, storage, data, analysis, and support requirements along with a supporting, science-based narrative.

Berkeley Lab contributions to the workshop include:

  • Associate Lab Director Kathy Yelick: “NERSC’s Role in BER Research”
  • Bill Collins, Earth Sciences Division: “Case Study: CLIMES and IMPACTS”
  • David Goodstein, Genomics Division, and Victor Markowitz, Computational Research Division: “Case Study: Joint Genome Institute”
  • Sudip Dosanjh, NERSC: “Introduction to the New NERSC Director”

CS Staff Contribute to Extremely Large Databases Conference

XLDB-2012, the Extremely Large Databases Conference, is taking place this week (Sept. 10–13) at Stanford University, and several CRD and NERSC staff are participating. Surendra Byna will present a “lightning talk” on “Parallel Data, Analysis, and Visualization of a Trillion Particles.” Lavanya Ramakrishnan, Yushu Yao, and Shane Canon co-authored a poster on “Evaluation of NoSQL databases, SciDB and Hadoop for Scientific Data.”


Updates on CRT Project Provided at Town Hall Meeting on Wednesday

Staff who want to learn more about the Computational Research and Theory (CRT) project are invited to attend a town hall meeting on Wednesday, Sept. 12, from 1:00 to 2:00 pm in the Building 50 Auditorium. The gathering is an opportunity to learn about construction on a steep site, and updates on the schedule and its impacts will be provided. If you have questions about construction noise, this is your chance to find out how long you can expect it to last. For more information, contact Henry Martinez (x6259). A live video camera view of the CRT project from the top of Building 50B is also available online.


This Week’s Computing Sciences Seminars

DREAM Seminar: Fast-Lipschitz Optimization
Tuesday, Sept. 11, 4:10–5:00 pm, 540 Cory Hall, UC Berkeley
Carlo Fischione, KTH Royal Institute of Technology, Sweden

In many optimization problems, decision variables must be computed by algorithms that need to be fast, simple, and robust to errors and noise, both in centralized and in distributed setups. This occurs, for example, in contract-based design, sensor networks, smart grids, water distribution, and vehicular networks. In this seminar, a new, simple optimization theory named Fast-Lipschitz optimization is presented for a novel class of convex and non-convex, scalar and multi-objective optimization problems that are pervasive in the systems mentioned above. Fast-Lipschitz optimization can be applied to both centralized and distributed optimization. Fast-Lipschitz solvers exhibit low computational and communication complexity compared to existing solution methods. In particular, compared to traditional Lagrangian methods, which often converge linearly, the convergence of centralized Fast-Lipschitz algorithms is superlinear. Distributed Fast-Lipschitz algorithms converge quickly, as opposed to traditional Lagrangian decomposition and parallelization methods, which generally converge slowly and at the price of much message passing among the nodes. In both cases, the computational complexity is much lower than that of traditional Lagrangian methods. Fast-Lipschitz optimization is then illustrated with distributed estimation and detection applications in wireless sensor networks.
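The key computational idea is that, when the theory's qualifying conditions hold, the optimum coincides with the fixed point of the constraint functions, so it can be found by simple iteration rather than by solving a Lagrangian dual. Below is a minimal sketch of such a fixed-point iteration; the contractive map used here is invented purely for illustration and is not one of the problems from the talk.

```python
import numpy as np

# Illustrative contractive map f; at a Fast-Lipschitz optimum, x* = f(x*).
# The specific function below is made up for this sketch only.
def f(x):
    return 1.0 + 0.25 * np.tanh(x[::-1])

x = np.zeros(3)
for k in range(50):
    x_new = f(x)          # in a distributed setting, node i would evaluate f_i locally
    if np.max(np.abs(x_new - x)) < 1e-12:
        break
    x = x_new

print(f"converged in {k + 1} iterations to", x)
```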

ESnet Webinar: Globus Online for ESnet Users
Wednesday, Sept. 12, 11:00 am–12:30 pm

This webcast includes an overview of a Science DMZ and a demonstration of how Globus Online may be used for large-scale data transfer among ESnet storage nodes.

Registration information for the webcast is available online.
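For a concrete picture of such a transfer, here is a minimal sketch using the present-day globus_sdk Python package. Note that this SDK postdates the 2012 webinar, and the token, endpoint IDs, and paths below are hypothetical placeholders, not ESnet-specific values.

```python
import globus_sdk

TRANSFER_TOKEN = "..."                      # obtained via a Globus OAuth2 flow
SRC_ENDPOINT = "source-endpoint-uuid"       # hypothetical data transfer node
DST_ENDPOINT = "destination-endpoint-uuid"  # hypothetical data transfer node

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(TRANSFER_TOKEN)
)

# Describe a recursive, checksummed transfer between the two endpoints.
tdata = globus_sdk.TransferData(
    tc, SRC_ENDPOINT, DST_ENDPOINT,
    label="Bulk dataset copy", sync_level="checksum"
)
tdata.add_item("/data/run42/", "/archive/run42/", recursive=True)

task = tc.submit_transfer(tdata)            # Globus manages retries and integrity checks
print("submitted task:", task["task_id"])
```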

Matching, Visualizing and Archiving Cultural Heritage Artifacts Using Multi-Channel Images
Wednesday, Sept. 12, 12:00–1:00 pm, 310 Sutardja Dai Hall, Banatao Auditorium, UC Berkeley
Corey Toler-Franklin, UC Davis

Recent advancements in low-cost acquisition technologies have made it more practical to acquire real-world datasets on a large scale. This has led to a number of computer-based solutions for reassembling, archiving and visualizing cultural heritage artifacts.

In this talk, I will show how to combine aspects of these technologies in novel ways, and introduce algorithms that improve upon their overall efficiency and robustness. First, I will introduce a 2D acquisition pipeline that generates higher resolution color and normal maps than those available with the 3D scanning devices typically used in practical settings. Next, I will incorporate these normal maps into a novel multi-cue matching system that uses machine learning to reassemble small fragments of artifacts.

I will show examples of how this system is used by archaeologists at the Akrotiri Excavation Laboratory of Wall Paintings in Santorini, Greece, for reconstructing the Theran Frescoes. I will then present a non-photorealistic rendering pipeline for illustrating geometrically complex objects using images with multiple channels of information. I will demonstrate how this work is used for visualizing historic artifacts from digital museum collections.
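As background on the normal maps mentioned above, the sketch below shows a generic photometric-stereo estimate of per-pixel normals from images taken under known lighting directions. This is an assumed, textbook technique shown for illustration, not necessarily the speaker's acquisition pipeline.

```python
import numpy as np

def estimate_normals(images, light_dirs):
    """images: (k, h, w) intensities; light_dirs: (k, 3) unit light directions."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                      # (k, h*w) stacked intensities
    L = np.asarray(light_dirs, dtype=float)        # (k, 3)
    # Lambertian model: I = L @ (albedo * normal); solve in the least-squares sense.
    G, *_ = np.linalg.lstsq(L, I, rcond=None)      # (3, h*w)
    albedo = np.linalg.norm(G, axis=0) + 1e-12
    normals = (G / albedo).T.reshape(h, w, 3)
    return normals, albedo.reshape(h, w)

# Synthetic check: 4 lights, a 2x2 patch whose true normal is (0, 0, 1) everywhere.
lights = np.array([[0, 0, 1], [0.5, 0, 0.866], [0, 0.5, 0.866], [-0.5, 0, 0.866]])
imgs = np.einsum("kj,j->k", lights, [0, 0, 1])[:, None, None] * np.ones((4, 2, 2))
n, a = estimate_normals(imgs, lights)
print(n[0, 0])   # approximately [0, 0, 1]
```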

Scientific Computing and Matrix Computations Seminar: Deflations Preserving Relative Accuracy
Wednesday, Sept. 12, 12:10–1:00 pm, 380 Soda Hall, UC Berkeley
W. Kahan, Prof. Emeritus, UC Berkeley

Deflation turns a matrix eigenproblem into two of smaller dimensions by annihilating a block of off-diagonal elements. When does deflation perturb at worst the last significant digit or two of each of an Hermitian matrix’s eigenvalues no matter how widely their magnitudes spread? We seek practicable answers to this question, particularly for tridiagonals, analogous to answers for bidiagonals’ singular values found by Ren-Cang Li in 1994. How deflation affects singular vectors and eigenvectors is assessed too, as is the exploitation of spectral gaps when known.
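As a small numerical illustration of the question (a toy example constructed here, not taken from the talk), one can zero a tiny off-diagonal entry of a symmetric tridiagonal matrix and measure how much each eigenvalue moves in a relative sense:

```python
import numpy as np

def tridiag(d, e):
    # Symmetric tridiagonal matrix with diagonal d and off-diagonal e.
    return np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

d = np.array([4.0, 3.0, 2.0, 1.0])
e = np.array([0.5, 0.4, 1e-9])        # e[2] is the candidate for deflation

T = tridiag(d, e)
e_defl = e.copy()
e_defl[2] = 0.0                       # deflate: the problem splits into two smaller blocks

w = np.sort(np.linalg.eigvalsh(T))
w_defl = np.sort(np.linalg.eigvalsh(tridiag(d, e_defl)))
rel_change = np.abs(w - w_defl) / np.abs(w)
print("largest relative eigenvalue change:", rel_change.max())
# Prints a value near machine roundoff for this benign example; the talk asks when
# such a deflation is guaranteed safe even when eigenvalue magnitudes spread widely.
```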

Efficient Testing of Concurrent and Distributed Systems
Wednesday, Sept. 12, 3:00–4:00 pm, 373 Soda Hall, UC Berkeley
Pallavi Joshi, University of California, Berkeley

Many software systems today have poor reliability. In addition to billions of dollars in economic losses, software defects are responsible for a number of serious injuries and deaths in transportation accidents, medical treatments, and defense operations. The situation is getting worse as concurrency and distributed computing become integral parts of many real-world software systems. The non-determinism in concurrent and distributed systems and the unreliability of the hardware environment in which they operate can result in defects that are hard to find and understand. In this talk, I will present techniques and tools that we have developed to augment testing to enable it to quickly find and reproduce important bugs in concurrent and distributed systems. Our techniques are based on the following two key ideas: (i) use program analysis to increase coverage by predicting bugs that could have occurred in “nearby” program executions, and (ii) provide programming abstractions to enable testers to easily express their insights to guide testing without any knowledge about the underlying program analysis. The tools that we have built have found many serious bugs in large real-world software systems (e.g., Jigsaw web server, JDK, JGroups, and Hadoop File System).
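To see why such bugs are hard to hit with ordinary tests, consider a minimal lost-update race, shown below as a generic illustration rather than an example from the speaker's tools; whether the final count is wrong depends entirely on how the two threads happen to interleave on a given run.

```python
import threading

counter = 0

def increment(n):
    global counter
    for _ in range(n):
        tmp = counter     # read
        tmp += 1          # modify
        counter = tmp     # write: another thread may have updated counter in between

threads = [threading.Thread(target=increment, args=(1_000_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 2000000; many runs print less because updates are lost, while other
# runs happen to interleave benignly, which is exactly what makes such bugs elusive.
print("counter =", counter)
```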

A Low Mach Number Model for Moist Convection
Thursday, Sept. 13, 11:00 am–12:00 pm, 50F-1647
Warren O’Neill, Institut für Mathematik, Freie Universität Berlin, Germany

In the series of papers beginning with Almgren et al. (2006), a sound-filtering low Mach number model with source terms and compositional changes is developed for supernovae. Motivated by this work, we aim to create a similar low Mach number model for moist convection. To date, most sound-filtering moist convection models use the anelastic approximation, which is valid only for small density variations, whereas our model correctly captures incompressible flows with large density variations and is therefore more generally applicable, as shown in Achatz et al. (2010).
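As a rough sketch of the distinction (schematic forms under standard assumptions, not the speaker's exact equations), the anelastic approximation constrains the velocity against a fixed background density, while the low Mach number approach of Almgren et al. retains a more general divergence constraint whose source term accommodates heat release and compositional change, and hence large density variations:

```latex
% Anelastic approximation: fixed background density \rho_0(z)
\nabla \cdot \big( \rho_0(z)\, \mathbf{u} \big) = 0

% Low Mach number model (schematic): background coefficient \beta_0(z) and
% a source term S from heating and compositional change
\nabla \cdot \big( \beta_0(z)\, \mathbf{u} \big) = \beta_0(z)\, S
```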

To make the transition from astrophysics to meteorology, we utilize the moist convection model of Bannon (2002). To implement the model numerically, we are incorporating it into a finite volume code for low Mach number flow; details of the numerics can be found in Klein (2009). The model will be verified against the benchmark test case of Bryan and Fritsch (2002). Once the model is verified against this ideal case, we would like to develop it further by incorporating precipitation and more complicated microphysics and thermodynamics, using LES to model the turbulence of deep precipitation and of the convective boundary layers, and extending the code to model squall lines.


Link of the Week: Are Computers Electronic Cocaine?

“The computer is electronic cocaine for many people,” says Peter Whybrow, director of UCLA’s Institute for Neuroscience and Human Behavior. “Our brains are wired for finding immediate reward. With technology, novelty is the reward. You essentially become addicted to novelty.”

With many of the usual constraints that used to prevent people from doing things 24 hours a day—like distance and darkness—falling away, our fast new lives have some of the symptoms of clinical mania: excitement over acquiring new things, high productivity, fast speech—followed by sleep loss, irritability, and depression. Whybrow believes the physiological consequences of this modern mania are dramatic, contributing to epidemic rates of obesity, anxiety, and depression. Read more.



About Computing Sciences at Berkeley Lab

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.