
InTheLoop | 01.17.2012

January 17, 2012

Closest-Ever Look at a Type Ia Supernova Explosion

Even as the “supernova of a generation” came into view in backyards across the northern hemisphere last August, physicists and astronomers who had caught its earliest moments (including Peter Nugent of CRD’s Computational Cosmology Center) were developing a much clearer picture of what happens during a titanic Type Ia explosion. Now they have announced the closest, most detailed look ever at one of the universe’s brightest celestial mileposts. Read more.


A Precision Model of the Cosmos

Using NERSC systems, Berkeley Lab scientists and their Sloan Digital Sky Survey colleagues have produced the biggest 3D color map of the universe ever. The team also achieved the most accurate calculation yet of how matter clumps together—from a time when the universe was only half its present age until now. Read more.


RECESS at Chabot with David Bailey This Friday Evening

The Chabot Space and Science Center’s monthly NightSchool event returns on January 20 with “RECESS!” Join the Computational Research Division’s Chief Technologist David Bailey in conversations about the latest developments in the theory of pi; play dodgeball with Cal Bears mascot “Oski”; build and race Lego cars; join a hula hoop contest; and then lean back for a live planetarium show. RECESS is going to be fun, especially with adult beverages, observatory deck telescope viewing (weather permitting), and the Exquisite Corpse storytelling weave. For more information and to purchase tickets, go here.


NCWIT Resources for Technical Women and Underrepresented Minorities

The National Center for Women & Information Technology (NCWIT) Workforce Alliance has created four resources for technical women. While these resources are geared toward women, the tips apply just as well to members of other underrepresented groups.



This Week’s Computing Sciences Seminars

Mint Programming Model for Massively Parallel Architectures
Tuesday, January 17, 10:00–11:00 am, 70A-3377
Didem Unat, University of California, San Diego

Data parallel scientific applications are attractive candidates for acceleration via massively parallel architectures like GPUs. A major challenge, however, is that programmers must adopt an unfamiliar programming style and understand the architectural subtleties required to master performance programming. To make accelerator technology more accessible to the broader community, we designed the Mint programming model and implemented an accompanying compiler. Mint allows the programmer to express parallelism at a high level. Its compiler parallelizes loop nests, performs data locality optimizations, and relieves the programmer of a variety of tedious tasks such as managing threads.

The Mint translator generates highly optimized CUDA C from annotated C source. Using just five different Mint pragmas, a non-expert programmer can benefit from GPU technology without having to be a CUDA expert. We implemented a domain-specific optimizer that targets 3D stencil methods; it detects the stencil structure in the computation and performs automatic optimizations accordingly. On a set of commonly used 3D stencil methods as well as real-world applications, Mint achieves nearly 80% of the performance of hand-written CUDA on the GPU.
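The loop nests such a translator targets are easy to picture. Below is a minimal pure-Python sketch of a 7-point 3D Jacobi stencil, the kind of dense loop nest a directive-based compiler like Mint can offload to a GPU; the grid size, boundary setup, and iteration count are our own illustrative choices, and no actual Mint syntax is shown.

```python
# Minimal 7-point 3D Jacobi stencil: the kind of dense loop nest a
# directive-based translator can parallelize for a GPU.
# Grid size and iteration count are arbitrary illustrative choices.

def jacobi7(u, n, iters):
    """Average each interior cell with its six face neighbors."""
    for _ in range(iters):
        v = [[[u[i][j][k] for k in range(n)] for j in range(n)] for i in range(n)]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                for k in range(1, n - 1):
                    u[i][j][k] = (v[i-1][j][k] + v[i+1][j][k] +
                                  v[i][j-1][k] + v[i][j+1][k] +
                                  v[i][j][k-1] + v[i][j][k+1]) / 6.0
    return u

n = 8
# Hot boundary (1.0) on the i=0 face, zero everywhere else.
grid = [[[1.0 if i == 0 else 0.0 for _ in range(n)] for _ in range(n)]
        for i in range(n)]
grid = jacobi7(grid, n, 10)
print(grid[1][4][4])  # heat has begun diffusing inward from the hot face
```

In hand-written CUDA, the three interior loops become the thread grid; a translator like Mint generates that mapping, plus locality optimizations, from pragmas on the serial loops.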

A Fast Direct Solver for Non-Oscillatory Integral Equations
Wednesday, January 18, 10:00–11:00 am, 50F-1647
Kenneth Ho, Courant Institute of Mathematical Sciences, New York University

Over the past two decades, integral equation methods have become very successful at the numerical solution of many important differential equations. This has been due in large part to the development of algorithms to rapidly apply certain Green’s function matrices, which, when coupled with Krylov subspace methods, have enabled fast iterative solvers with optimal or near-optimal complexities. Although such techniques have transformed many aspects of computational science and engineering, they remain somewhat ineffective in the face of ill-conditioning or the need to solve many related systems. Here, we discuss a fast direct solver for non-oscillatory integral equations that overcomes these deficiencies. The algorithm is based on a new form of multilevel matrix compression extending that underlying classical fast matrix-vector product schemes, and has some connection with sparse direct solvers. Time permitting, we will also touch on similar investigations for oscillatory integral equations, for which much remains to be discovered. This is joint work with Leslie Greengard.

Computational Science and Engineering (CSE) Support for HECToR: UK’s National Supercomputer
Wednesday, January 18, 10:00–11:00 am, OSF 943-238
Ian Bush and Philip Ridley, NAG Ltd.

HECToR is short for High-End Computing Terascale Resource. The HECToR facility is centred around the provision of world-class, high-end computing resource for use in a wide range of disciplines. The facility is run by a consortium of organisations, commercial and academic, known as The HECToR Partners. In this talk we will outline the current status of HECToR, and the role that NAG Ltd. plays in providing computational science and engineering support for the users of the service.

The Supple Grid: Adapting to Variable and Distributed Resources
Wednesday, January 18, 4:00–5:00 pm, 306 Soda Hall, UC Berkeley
Alexandra von Meier, Co-Director, Electric Grid Research Program at the California Institute for Energy and Environment

The integration of intermittent and distributed renewable resources in the electric grid represents a coordination problem in space and time at increasingly higher resolution, presenting a significant set of challenges to the legacy infrastructure. A key response to these challenges will be the integration of information and control technologies at or near the periphery of the grid, including demand response as well as new approaches to power quality management and protection systems. By comparing the most important limitations of the legacy grid with the implications of an intelligent periphery, this talk will discuss possible implications for the evolution of power systems in the context of decarbonized electricity.

Multiscale Analysis of Solid Materials: From Electronic Structure Models to Continuum Theories
Wednesday, January 18, 4:10–5:00 pm, 939 Evans Hall, UC Berkeley
Jianfeng Lu, Courant Institute of Mathematical Sciences, New York University

Modern materials science increasingly focuses on studies at the microscopic scale. This calls for a mathematical understanding of electronic structure and atomistic models, as well as of their connections to continuum theories. In this talk, we will discuss some recent work in which we develop and generalize ideas and tools from the mathematical analysis of continuum theories to these microscopic models. We will focus on the macroscopic limit and on microstructure pattern formation in electronic structure models.

Analyzing Scale Interactions in Compressible Turbulent Flows by Coarse-Graining
Thursday, January 19, 10:00–11:00 am, 70A-3377
Hussein Aluie, Center for Non-Linear Studies, Los Alamos National Laboratory

We utilize a coarse-graining framework, rooted in a technique commonly used in the study of PDEs and in large eddy simulation modeling, to analyze nonlinear scale interactions in flow fields. The approach is powerful and very general: it allows for probing the dynamics simultaneously in scale and in space, and is not restricted by the usual assumptions of homogeneity or isotropy. The method quantifies the coupling between different scales through exact mathematical analysis and numerical simulations, and may be used to extract certain scale-invariant universal features of the dynamics.

We apply these multiscale analysis tools to study compressible turbulence, where we prove that the inter-scale transfer of kinetic energy is dominated by local interactions. In particular, our results preclude the direct transfer of kinetic energy from the large scales to dissipation scales, such as into shocks, in high-Reynolds-number turbulence, contrary to common belief. The assumptions used in our proofs on the scaling of structure functions are weak and enjoy compelling empirical support. Under a stronger assumption on the pressure-dilatation co-spectrum, we show that the mean kinetic and internal energy budgets statistically decouple beyond a transitional conversion range. We present supporting evidence from high-resolution $1024^3$ numerical simulations. Our analysis establishes the existence of an ensuing inertial range over which mean kinetic energy cascades locally and conservatively, despite not being an invariant of the dynamics.
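The coarse-graining operation itself is simple to sketch: convolve a field with a low-pass kernel at scale ℓ to obtain the large-scale part, and treat the remainder as the sub-scale part. Below is a minimal 1D pure-Python illustration using a box filter and a synthetic two-mode signal; all parameter choices are ours, not the speaker's.

```python
import math

# Coarse-graining in 1D: convolve a periodic field with a box filter of
# width ell to get the large-scale part; the remainder is sub-scale.

def coarse_grain(field, ell):
    """Box-filter a periodic field over a window of ell points."""
    n, half = len(field), ell // 2
    return [sum(field[(i + d) % n] for d in range(-half, half + 1))
            / (2 * half + 1) for i in range(n)]

n = 256
# One low-frequency mode plus one high-frequency mode.
field = [math.sin(2 * math.pi * i / n)
         + 0.3 * math.sin(2 * math.pi * 32 * i / n) for i in range(n)]

large = coarse_grain(field, 17)                 # scales > ell pass through
small = [f - l for f, l in zip(field, large)]   # sub-filter-scale residual
```

The filtered field retains the slow mode almost unchanged while the fast mode survives mostly in the residual, which is exactly the scale decomposition the framework analyzes (in 3D, with tensorial budgets).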

TRUST Security Seminar: Providing Security with Insecure Systems
Thursday, January 19, 1:00–2:00 pm, Soda Hall, Wozniak Lounge, UC Berkeley
Andrew Odlyzko, University of Minnesota

Network security is terrible, and we are constantly threatened with the prospect of imminent doom. Yet such warnings have been common for the last two decades. In spite of that, the situation has not gotten any better. On the other hand, there have not been any great disasters either. To understand this paradox, we need to consider not just the technology, but also the economics, sociology, and psychology of security. Any technology that requires care from millions of people, most very unsophisticated in technical issues, will be limited in its effectiveness by what those people are willing and able to do. This suggests that one can provide adequate security using contrarian approaches that violate traditional security and system engineering precepts (such as encouraging “spaghetti code”).

Reduced-Order Models of Complex Flows: Modeling, Analysis and Computations
Friday, January 20, 10:00–11:00 am, 50F-1647
Zhu Wang, Department of Mathematics, Virginia Tech

Reduced-order models are frequently used in the simulation of complex flows to overcome the high computational cost of direct numerical simulations, especially for three-dimensional nonlinear problems. Proper orthogonal decomposition is one of the most commonly used methods to generate reduced-order models for turbulent flows dominated by coherent structures. To balance the low computational cost required by a reduced-order model and the complexity of the targeted turbulent flows, appropriate closure modeling strategies need to be employed. In this talk, we will present several new nonlinear closure methods for proper orthogonal decomposition reduced-order models. We will also present numerical results for the new models used in realistic applications such as uncertainty quantification in nuclear engineering, energy efficient building design and control, and climate modeling.
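The proper orthogonal decomposition step can be sketched via the method of snapshots: form the snapshot correlation matrix and extract its dominant eigenvector, whose weighted combination of snapshots gives the leading POD mode. Below is a minimal pure-Python illustration on synthetic data; the snapshot model, the use of power iteration, and all parameters are our own illustrative choices.

```python
import math, random

# Method-of-snapshots POD sketch: build the correlation matrix
# C[i][j] = <u_i, u_j> / m, find its dominant eigenvector by power
# iteration, and combine snapshots to form the leading POD mode.

def dominant_mode(snaps, iters=200):
    m = len(snaps)
    c = [[sum(a * b for a, b in zip(snaps[i], snaps[j])) / m
          for j in range(m)] for i in range(m)]
    v = [1.0] * m
    for _ in range(iters):                      # power iteration on C
        w = [sum(c[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # POD mode = snapshots weighted by the dominant eigenvector.
    mode = [sum(v[i] * snaps[i][k] for i in range(m))
            for k in range(len(snaps[0]))]
    norm = math.sqrt(sum(x * x for x in mode))
    return [x / norm for x in mode]

# Synthetic snapshots: one coherent structure (a sine) with varying
# amplitude, plus a little noise.
random.seed(0)
n, m = 64, 20
base = [math.sin(2 * math.pi * k / n) for k in range(n)]
snaps = [[(1 + 0.5 * math.sin(t)) * b + 0.05 * random.gauss(0, 1)
          for b in base] for t in range(m)]

mode = dominant_mode(snaps)  # should recover the coherent structure
```

A reduced-order model then evolves only the coefficients of a few such modes; the closure strategies discussed in the talk compensate for the truncated modes' effect on that evolution.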

The Swarm at the Edge of the Cloud
Friday, January 20, 1:00–2:00 pm, 521 Cory Hall (Hogan Room), UC Berkeley
Jan Rabaey, Donald O. Pederson Distinguished Professor, EECS/UCB

Mobile devices such as laptops, netbooks, tablets, smart phones and game consoles have become our de facto interface to the vast amount of information delivery and processing capabilities of the cloud. The move to mobility has been enabled by the dual forces of ubiquitous wireless connectivity combined with the increasing energy efficiency offered by Moore’s law. Yet a major capability of mobile devices remains largely untapped: the ability to interact with the world immediately around us. A third layer of information acquisition and processing devices—commonly called the sensory swarm—is emerging, enabled by even more pervasive wireless networking and the introduction of novel ultra-low power technologies. This gives rise to the true emergence of concepts such as cyber-physical and bio-cyber systems, immersive computing, and augmented reality. The functionality of the swarm arises from connections of devices, leading to a convergence between Moore’s and Metcalfe’s laws, in which scaling refers no longer to the number of transistors per chip, but rather to the number of interconnected devices.

Enabling this fascinating paradigm—which represents true wireless ubiquity—still requires major breakthroughs on a number of fronts. Providing the always-connected abstraction and the reliability needed for many of the intended applications requires a careful balancing of resources that are in high demand: spectrum and energy. The presentation will analyze those challenges, and propose some disruptive solutions that engage the complete stack—from device to system.




About Computing Sciences at Berkeley Lab

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.