InTheLoop | 10.22.2012
Modeling Feat at NERSC Sheds Light on Protein Channel’s Function
Chemists at the California Institute of Technology (Caltech) have managed, for the first time, to simulate the biological function of a channel called the Sec translocon, which allows specific proteins to pass through membranes. The feat required bridging timescales from the realm of nanoseconds all the way up to full minutes, exceeding the scope of earlier simulation efforts by more than six orders of magnitude. The result is a detailed molecular understanding of how the translocon works.
The new computational model and the findings based on its results are described by Thomas Miller, an assistant professor of chemistry at Caltech, and graduate student Bin Zhang in the current issue of the journal Cell Reports. Miller and Zhang are both NERSC users; computational resources for this study were provided by NERSC and others. Read more.
In the News: Andy Nonaka Helps Bring the Universe into Full Focus
An article on visualization in Symmetry, a joint Fermilab/SLAC magazine, reports that the work of Andy Nonaka, an applied mathematician in Berkeley Lab’s Center for Computational Sciences and Engineering, is helping provide a better understanding of the physics of Type Ia supernovae. The article also cites the role of supercomputers like NERSC’s Hopper system in providing the simulation data used in visualizations. Read more.
Deadline Tomorrow: Show Off Your Work at SC12 Keynote and Plenary Sessions
The keynote and plenary sessions at SC12 will feature an SC-sponsored “Walk-in Walk-out Video Loop” which will run before and after these sessions as attendees arrive and depart. This is a unique, highly visible platform to showcase HPC projects that can be shared via video or images. Organizers are particularly interested in highlighting any unique scientific visualizations.
- Content will be projected on a 32-by-18-foot high-definition surface with a 16:9 aspect ratio. Submissions should be at least 1920 x 1080 pixels at 30 frames per second.
- Files should be submitted in a standard QuickTime or Windows codec format.
- Please include a separate file with your organization’s logo, along with the names and photos of people involved in the project (if available).
- Participation is FREE!
- Upload files to http://submissions.supercomputing.org using the “Walk-in, Walk-out video” submission form.
- Please clearly label your files with your institution name.
Submissions are due by 11:59 am Eastern time tomorrow, October 23. If you have any questions, please email email@example.com.
This Week’s Computing Sciences Seminars
Reduction to Bidiagonal Form—New Algorithms and Issues
Wednesday, October 24, 2:00–3:00 pm, 50F-1647
Jesse Barlow, The Pennsylvania State University
Since Golub and Kahan’s 1965 landmark paper, bidiagonal reduction has been an important first step in computing the singular value decomposition. The algorithm also plays an important role in the solution of ill-posed least squares problems and the computation of matrix functions. Two algorithms for bidiagonal reduction were presented in that original paper: a Householder-based reduction usually used for dense matrices, and a Lanczos-based reduction traditionally used to extract a subset of singular values and vectors from a large, sparse, or structured matrix. Variants of the Householder-based reduction have been proposed by Lawson and Hansen, Chan, and Trefethen and Bau.
The Lanczos-based reduction has always suffered from the same problem as the Lanczos algorithm for symmetric matrices: loss of orthogonality in the reduction, resulting in simple singular values converging as clusters. Simon and Zha proposed a version of the algorithm that reorthogonalizes just one of the two sets of Lanczos vectors; the speaker, along with Bosner and Drmač, developed a hybrid algorithm that produces one set of Lanczos vectors using Householder transformations and the other using the Lanczos recurrence. In this talk, a framework based on an observation by Charles Sheffield and recent work by Paige is used to understand both the Simon-Zha and the Barlow, Bosner, and Drmač variants of the Golub-Kahan-Lanczos (GKL) bidiagonal reduction. It is shown that if good orthogonality is maintained in just one set of Lanczos vectors, the algorithm retains a number of desirable properties, including good orthogonality in the leading left and right singular vectors, and it acts exactly as the GKL algorithm in exact arithmetic on a nearby matrix. At the end of the talk, issues involving generalization to a block GKL algorithm and implementation are discussed.
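The GKL recurrence the talk builds on can be sketched in a few lines of NumPy. The following is an illustrative sketch with one-sided reorthogonalization of the right Lanczos vectors (in the spirit of the Simon-Zha variant described above), not the speaker’s implementation; the function name and interface are hypothetical.

```python
import numpy as np

def gkl_bidiagonalize(A, k, rng=None):
    """Golub-Kahan-Lanczos bidiagonalization: after k steps, A V = U B with
    B upper bidiagonal. Only the right Lanczos vectors are reorthogonalized
    (one-sided reorthogonalization, as in the Simon-Zha variant)."""
    rng = np.random.default_rng(0) if rng is None else rng
    m, n = A.shape
    U = np.zeros((m, k))
    V = np.zeros((n, k))
    alpha = np.zeros(k)             # diagonal of B
    beta = np.zeros(max(k - 1, 0))  # superdiagonal of B
    v = rng.standard_normal(n)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        # left vector: u_j = (A v_j - beta_{j-1} u_{j-1}) / alpha_j
        u = A @ V[:, j]
        if j > 0:
            u -= beta[j - 1] * U[:, j - 1]
        alpha[j] = np.linalg.norm(u)
        U[:, j] = u / alpha[j]
        # right vector: v_{j+1} = (A^T u_j - alpha_j v_j) / beta_j
        r = A.T @ U[:, j] - alpha[j] * V[:, j]
        # one-sided reorthogonalization against all previous right vectors
        r -= V[:, : j + 1] @ (V[:, : j + 1].T @ r)
        if j < k - 1:
            beta[j] = np.linalg.norm(r)
            V[:, j + 1] = r / beta[j]
    B = np.diag(alpha) + np.diag(beta, 1)
    return U, B, V
```

Running k = n steps on a small dense matrix reduces it completely, so the singular values of the bidiagonal B match those of A; in the large sparse setting the talk addresses, one would stop after k ≪ n steps and take Ritz approximations from B.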
LineAO—Improved Three-Dimensional Line Rendering
Thursday, October 25, 11:00 am–12:00 pm, 50F-1647
Sebastian Eichelbaum, Universität Leipzig, Germany
Rendering large numbers of dense line bundles in three dimensions is a common need for many visualization techniques, including streamlines and fiber tractography. Unfortunately, depicting the spatial relations inside these line bundles is often difficult but critical for understanding the represented structures. Many approaches have evolved to solve this problem by providing special illumination models or tube-like renderings. Although these methods improve spatial perception of individual lines or related sets of lines, they do not solve the problem for complex spatial relations between dense bundles of lines. This talk presents a novel approach that improves spatial and structural perception of line renderings by providing an ambient occlusion technique suited for line rendering in real time.
Memory Movement Optimized Multigrid Algorithms
Thursday, October 25, 3:00–4:00 pm, 50B-4205
Mark F. Adams, Columbia University
Current and future generations of large-scale computers will be constrained by power; this will result in an epochal change in computer architectures that rivals the transition to distributed-memory machines in the late 1980s. Data movement, though important since the 1980s, will be central to the cost of computing on future generations of machines, because memory and data movement are responsible for most of the power budget of large-scale computers. Future architectures are far from well understood, but we can assume that memory movement and massive concurrency will be central to effective algorithms on these machines. With that in mind, we propose a (non)linear equation solver algorithm, a parallel extension of a low-memory multigrid method proposed by Brandt in 1984 (segregated refinement, with log(N) memory complexity), that has the potential to radically reduce memory usage and, consequently, data movement. This algorithm possesses massive concurrency, is amenable to data- or task-driven programming models, enforces good data locality, is amenable to loop fusion, and is generally attractive in memory-centric computer cost models. We describe the algorithm and techniques to implement it, and show numerical results for a simple model problem to verify the correctness of the method.
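The multigrid ingredients the abstract refers to (smoothing, residual restriction, coarse-grid correction, prolongation) can be illustrated with a textbook two-grid cycle for the 1D Poisson equation. This is a standard classroom sketch, not the segregated-refinement algorithm Adams will discuss; all function names are illustrative.

```python
import numpy as np

def laplacian(u, h):
    # apply the 1D stencil (-1, 2, -1)/h^2 to interior unknowns (zero Dirichlet BCs)
    Au = 2.0 * u.copy()
    Au[1:] -= u[:-1]
    Au[:-1] -= u[1:]
    return Au / (h * h)

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    # weighted-Jacobi smoothing: damps the high-frequency error components
    for _ in range(sweeps):
        u = u + omega * (h * h / 2.0) * (f - laplacian(u, h))
    return u

def two_grid_cycle(u, f, h, sweeps=3):
    """One two-grid V-cycle for -u'' = f on a uniform grid with an odd
    number of interior points: pre-smooth, restrict the residual,
    solve the coarse problem exactly, prolong the correction, post-smooth."""
    u = jacobi(u, f, h, sweeps)
    r = f - laplacian(u, h)
    nc = (len(f) - 1) // 2                                  # coarse interior points
    rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])     # full-weighting restriction
    Ac = (2.0 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / (2.0 * h) ** 2
    ec = np.linalg.solve(Ac, rc)                            # direct coarse solve
    e = np.zeros(len(f))                                    # linear interpolation back
    e[1::2] = ec
    e[0::2][:nc] += 0.5 * ec
    e[0::2][1:] += 0.5 * ec
    return jacobi(u + e, f, h, sweeps)
```

A few cycles reduce the algebraic error to well below the discretization error; the point of the talk is how to organize such cycles so that data movement, not flops, is minimized.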
Link of the Week: Higgs Boson Makes Beautiful Music
This summer’s discovery of the Higgs boson — or, more accurately, a “Higgs-like particle” — definitely inspired physicists, who waited decades for that momentous occasion. But it’s also inspiring musicians to create ethereal-sounding music. The process is called sonification: taking raw data and transforming it into sound while still retaining the information contained therein. Discovery News provides the Higgs music and links to other examples of scientific sonification in this article.
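At its simplest, sonification maps data values onto musical pitches while preserving their ordering. The toy sketch below (hypothetical names, not the method used by the musicians in the article) rescales a data series onto a two-octave pentatonic scale and returns MIDI note numbers and frequencies.

```python
import numpy as np

def sonify(values, scale=None, base_midi=60):
    """Map a data series onto pitches: rescale values to indices into a
    two-octave pentatonic scale, then convert to MIDI notes and Hz.
    A toy sonification sketch; names and choices are illustrative."""
    scale = [0, 2, 4, 7, 9] if scale is None else scale  # C-major pentatonic offsets
    v = np.asarray(values, dtype=float)
    span = v.max() - v.min()
    if span == 0:
        idx = np.zeros(len(v), dtype=int)
    else:
        # spread the data across two octaves of the scale
        idx = np.rint((v - v.min()) / span * (len(scale) * 2 - 1)).astype(int)
    notes = [base_midi + 12 * (i // len(scale)) + scale[i % len(scale)] for i in idx]
    freqs = [440.0 * 2 ** ((n - 69) / 12) for n in notes]  # equal temperament
    return notes, freqs
```

Because the mapping is monotone, larger data values always sound higher, so the information in the series survives the trip into sound.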
About Computing Sciences at Berkeley Lab
The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.
ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.
Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.