InTheLoop | 02.07.2011
February 7, 2011
Simulations Shed Light on Fate of Sequestered CO2
Researchers suspect that underground, or geologic, carbon sequestration will be a key tool in reducing atmospheric CO2. To investigate this idea further, Berkeley Lab’s George Pau took advantage of the massively parallel computing capacity of NERSC to create the first-ever three-dimensional simulations exploring how sequestered CO2 and saline aquifers interact. Unprecedented in detail, these simulations—run in both 2-D and 3-D—will help scientists better predict the success of this kind of sequestration project. Read more. This article was also the subject of an HPCwire editorial.
Cray XE6 Training Starts Today at NERSC
NERSC, Los Alamos National Laboratory, and Cray, Inc. are presenting a training class today and tomorrow (February 7–8) focused on the new Hopper Cray XE6 system. The training is being held at the Oakland Scientific Facility and broadcast as a webinar. Participation is not restricted to NERSC users.
Day 1 is an introduction to using modern supercomputers with an emphasis on the Cray XE6, including an introduction to thread programming with OpenMP. Day 2 will present intermediate instruction on effective and efficient use of the Cray XE6. View the agenda here. If there is sufficient demand, the training will be repeated.
ESnet Upgrades Network Performance Knowledge Base
ESnet’s performance knowledge base, fasterdata.es.net, has been updated and reorganized. The site contains over 85 pages of information and advice and receives more than 3,000 hits per week from around the world. It is used by people in industry and the research and education (R&E) community to improve their network performance and troubleshoot problems. Read more.
Patterson Sees RISC Comeback in Post-PC Eras
In a two-part blog post, “RISC versus CISC Wars in the PostPC Eras: Part 1, Part 2,” UC Berkeley professor and CRD researcher David Patterson argues that reduced instruction set computing (RISC)—which he pioneered in 1980—is making a comeback. He comments:
The importance of maintaining the sequential programming model combined with the increasingly abundant number of transistors from Moore’s Law led, in my view, to wretched excess in computer design. Measured by performance per transistor or by performance per watt, the designs of the late 1990s and early 2000s were some of the least efficient microprocessors ever built. This lavishness was acceptable for PCs, where binary compatibility was paramount and cost and battery life were less important, but performance was delivered more by brute force than by elegance.
However, RISC now dominates in personal mobile devices, and Patterson thinks that with some improvements it could make inroads even in the server and HPC markets.
This Week’s Computing Sciences Seminars
Par Lab Seminar Series: Exploiting Parallelism in Real-Time Music and Audio Applications
Tuesday, February 8, 11:00 am–12:30 pm, 438 Soda Hall (Wozniak Lounge), UC Berkeley
Amar Chaudhary, Center for New Music and Audio Technologies
Open Sound World (OSW) is a scalable, extensible programming environment that allows musicians, sound designers and researchers to process sound in response to expressive real-time control. This talk will provide an overview of OSW, past development and future directions, and then focus on the parallel processing architecture. Early in the development of OSW in late 1999 and early 2000, we made a conscious decision to support parallel processing as affordable multiprocessor systems were coming on the market. We implemented a simple scalable dynamic system in which workers take on tasks called “activation expressions” on a first-come, first-served basis, with facilities for ordering and prioritization to deal with real-time constraints and synchronicity of audio streams. In this presentation, we will review a simple musical example and demonstrate performance benefits and limitations of scaling to small multi-core systems. The talk will conclude with a discussion of how current research directions in parallel computing can be applied to this system to solve past challenges and scale to much larger systems.
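The first-come, first-served worker scheme with priority ordering described in the abstract can be sketched roughly as follows. This is an illustrative sketch only; the class name, the `submit`/`join` interface, and the priority levels are assumptions for the example, not OSW’s actual API.

```python
import queue
import threading

class ActivationQueue:
    """Minimal first-come, first-served task queue with priorities,
    loosely modeled on the scheme described above. Names here are
    illustrative, not OSW's actual API."""

    def __init__(self, num_workers=4):
        self.tasks = queue.PriorityQueue()
        self.results = []
        self._lock = threading.Lock()
        for _ in range(num_workers):
            threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, priority, seq, fn, *args):
        # Lower priority value runs first; seq preserves arrival order
        # among equal priorities (and keeps the tuples comparable).
        self.tasks.put((priority, seq, fn, args))

    def _worker(self):
        while True:
            _priority, _seq, fn, args = self.tasks.get()
            try:
                out = fn(*args)
                with self._lock:
                    self.results.append(out)
            finally:
                self.tasks.task_done()

    def join(self):
        self.tasks.join()

# Two workers process eight tasks; even-numbered tasks get the more
# urgent priority 0 (think audio-rate), odd-numbered tasks priority 1.
pool = ActivationQueue(num_workers=2)
for i in range(8):
    pool.submit(i % 2, i, lambda x: x * x, i)
pool.join()
```

With multiple workers the completion order is nondeterministic, but the priority queue guarantees that when a worker becomes free it always takes the most urgent pending task — the property the abstract describes for keeping audio streams synchronized under real-time constraints.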
Cities and Computers: Their Architecture
Wednesday, February 9, 12:00–1:00 pm, 250 Sutardja Dai Hall, UC Berkeley
Live broadcast at mms://media.citris.berkeley.edu/webcast
Forrest Warthman, Warthman Associates
City architecture and computer architecture have many similarities in their form and function — how they are physically built, how their parts interconnect, and how these interconnected parts operate.
Some of these functional similarities are evident from shared words: “gate,” “port,” “pipeline,” and “cache” describe aspects of computer architecture, but the concepts and their first implementation originated in old cities. Other functional similarities are not readily evident. Although we are adept at navigating through cities and using their resources, our understanding of cities is primarily intuitive, unlike our understanding of computer technology which is gained through disciplined study.
Cities can be viewed as machines — apparatuses that use power and have several interrelated parts, each with a definite function. But instead of performing just one task, as most machines do, cities perform thousands of tasks. In this sense, cities are the biggest and most complex machines on the planet. They are, among other things, the breeding ground for computer technology.
The similarities between cities and computers can provide insights for designers of complex structures in both fields. For example, city architects and engineers who design manufacturing production lines or international airports can exchange useful insights with computer architects who design microprocessors and data-transfer methods.
One useful way to understand similarities is through analogies that compare functions in cities with those in computers. This presentation is a sequence of analogies in the form of pictures and diagrams.
Statistical Methods for Combining Measurements and Models, with Application to Mapping Particulate Matter
Friday, February 11, 12:00–1:00 pm, 90-3122
Modern statistical methods, in particular Bayesian hierarchical models, provide a framework for combining various types of measurements in a single analysis. I’ll describe a basic latent variable framework for dealing with spatial and spatio-temporal data. The approach is to represent the spatial and spatio-temporal field of interest as a latent field and relate observations to that field. An observation may represent a single point in space and time or an average over space and time. Then I’ll describe how to use the approach to combine measurements with proxies such as computer code (model) output and remote sensing output. A critical aspect in many applications is accounting for systematic discrepancy between the proxy and the latent, unknown true field. I’ll present a case study of modeling ambient particulate matter in the eastern U.S. Finally, I’ll briefly discuss other methods in the statistical literature for combining measurements and model output and accounting for discrepancy.
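As a toy illustration of the measurement-plus-proxy idea above: for a single latent value observed by one direct measurement and one model proxy, under independent Gaussian errors, a flat prior, and a *known* systematic discrepancy, the posterior is just the precision-weighted average of the measurement and the bias-corrected proxy. The function and numbers below are a hedged sketch of that special case, not the speaker’s method; the hierarchical models in the talk generalize this across space and time and estimate the discrepancy rather than assuming it known.

```python
def combine(measurement, sigma_meas, proxy, proxy_bias, sigma_proxy):
    """Posterior mean and variance for one latent value given a direct
    measurement and a bias-corrected model proxy, assuming independent
    Gaussian errors, a flat prior, and a known discrepancy."""
    corrected = proxy - proxy_bias          # remove systematic discrepancy
    w_meas = 1.0 / sigma_meas ** 2          # precision of the measurement
    w_proxy = 1.0 / sigma_proxy ** 2        # precision of the proxy
    mean = (w_meas * measurement + w_proxy * corrected) / (w_meas + w_proxy)
    var = 1.0 / (w_meas + w_proxy)
    return mean, var

# Hypothetical numbers: a monitor reads 12.0 (sd 2.0); a model predicts
# 18.0 but is known to run 3.0 high (sd 4.0 after correction).
mean, var = combine(12.0, 2.0, 18.0, 3.0, 4.0)
```

The combined estimate (mean 12.6, variance 3.2) sits closer to the more precise monitor reading, and its variance is smaller than either source alone — the basic payoff of combining measurements with model output.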
Link of the Week: Desktop Archeology
A new archeological data archive and Web-based collaborative learning tool is providing researchers, educators, filmmakers, and digital heritage enthusiasts with the most complete data to date of any ancient Maya site, according to an article in Scientific Computing. The archive centerpiece is a priceless 3-D record of the ancient Maya city of Chichen Itza, recently voted one of the “New Seven Wonders of the World” by an eponymous Swiss-based foundation.
Computer visualization researchers at the Institute for the Study and Integration of Graphical Heritage Techniques (INSIGHT) generated the open source archeological data archive onsite in the Yucatan, through the use of 3-D laser scanning equipment and digital photographs. The INSIGHT team then synthesized raw data into 3-D computer models of the site today; the team also visualized the site as it may have looked in the past, paying meticulous attention to texture and light. Their data and tools are now available online at www.mayaskies.net.
Mayaskies.net builds on research completed for Chabot Space and Science Center’s Tales of the Maya Skies, a full-dome planetarium production showcasing the astronomical achievements of the ancient Maya.
About Computing Sciences at Berkeley Lab
The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.
ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 7,000-plus scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are Department of Energy Office of Science User Facilities.
Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.