# InTheLoop | 01.23.2012

January 23, 2012

## ESnet/Internet2 Joint Techs Conference This Week

The Winter 2012 Joint Techs conference, the premier international conference of network engineers, will focus on software-defined networking (SDN), cloud computing, emerging technologies, and infrastructure support for data-intensive science. The meeting, co-sponsored by the ESnet Site Coordinating Committee (ESCC) and Internet2, is being held Sunday, Jan. 22, through Thursday, Jan. 26, in Baton Rouge, Louisiana, hosted by the Louisiana Optical Network Initiative (LONI).

**Mike McPherson** of the University of Virginia and **Mike Bennett** of Lawrence Berkeley National Laboratory will lead the discussions on cloud computing and emerging technologies on Monday, Jan. 23. **Steve Corbato** of the University of Utah and **Eli Dart** from ESnet will chair the sessions on infrastructure support for data-intensive science on Tuesday, Jan. 24. **Matt Davy** from Indiana University and **Jason Lee** from NERSC will lead the software-defined networking meetings on Wednesday, Jan. 25.

Other Berkeley Lab contributions include:

- **Eli Dart** and **Brian Tierney:** Achieving a Science “DMZ”
- **Chris Tracy:** 100G Deployment — Challenges & Lessons Learned from the ANI Prototype and SC11
- **Chin Guok, Evangelos Chaniotakis,** and **Eric Pouyoul** (contributors): ARCHSTONE: Intelligent Network Services for Advanced Application Workflows
- **Chin Guok:** Evolution of OSCARS
- **Gary Jung:** Bootstrapping Institutional Capability
- **Eli Dart** and **Brent Draney:** National Laboratory Success Stories
- **Gregory Bell:** ESnet Update

In addition, tutorials will be held on constructing an OpenFlow network, installing equipment in a colocation/telco facility, and building a successful infrastructure to support data-intensive science. On January 25 and 26, following the Joint Techs meeting, the ESCC will hold its semi-annual meeting of its members, providing a technical forum that helps establish community practices to ensure the smooth operation of the ESnet-connected sites. A network performance workshop, a Global Lambda Integrated Facility (GLIF) meeting, and NetGurus sessions will be colocated with Joint Techs during the week of the conference.

## NERSC Course: Object-Oriented Programming in Fortran 2003

Registration is open for a three-day course sponsored by NERSC entitled “Object-Oriented Programming in Fortran 2003” to be held on the UC Berkeley campus March 26–28, 2012. The course is being taught by **Damian Rouson** and **Karla Morris** from Sandia National Laboratories.

Registration is required, but there is no charge for the event. The course is limited to the first 30 registrants. This is a local event only; no remote broadcast is planned.

Fortran 2003 explicitly supports object-oriented programming (OOP). OOP aims to increase a program’s maintainability, in part by reducing cross-module data dependencies, and to increase a program’s reusability, in part by providing for extensible derived types. Emerging compiler support for Fortran 2003 inspires a more modern program design and implementation style. This course provides the requisite skills. Day 1 introduces OOP in Fortran 2003. Day 2 introduces patterns of best practice in program organization. Day 3 explores several paths toward parallel OOP. Examples will utilize introductory-level numerical algorithms from linear algebra and differential equations inspired by multiphysics modeling, that is, coupled field problems common to many interdisciplinary, engineering, and physical science simulations.
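Fortran 2003's "extensible derived types" roughly parallel class inheritance in other languages. As a rough analogue only (this is not course material, and the names here are invented), the idea of extending a base type while inheriting its behavior looks like this in Python:

```python
# Python analogue of a Fortran 2003 extensible derived type: a base "field"
# type whose behavior is inherited by an extended type for a coupled problem.
class Field:
    def __init__(self, values):
        self.values = list(values)

    def norm(self):
        # Behavior defined once on the base type and shared by extensions
        # (akin to a type-bound procedure in Fortran 2003).
        return sum(v * v for v in self.values) ** 0.5

class TemperatureField(Field):
    # Corresponds to "type, extends(field) :: temperature_field" in Fortran.
    def __init__(self, values, conductivity):
        super().__init__(values)
        self.conductivity = conductivity

t = TemperatureField([3.0, 4.0], conductivity=0.6)
print(t.norm())  # 5.0, inherited from the base type
```

In Fortran 2003 the same pattern is written with `type, extends(...)` and type-bound procedures, which is what lets a multiphysics code add new coupled-field types without touching the base module.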

## This Week’s Computing Sciences Seminars

**Z-numbers—A New Direction in the Analysis of Uncertain and Imprecise Systems**

Monday, January 23, 3:30–5:00 pm, 3108 Etcheverry Hall, UC Berkeley

**Lotfi A. Zadeh,** Director, Berkeley Initiative in Soft Computing (BISC)

Decisions are based on information. To be useful, information must be reliable. Basically, the concept of a Z-number relates to the issue of reliability of information. A Z-number, Z, has two components, Z=(A,B). The first component, A, is a restriction (constraint) on the values which a real-valued uncertain variable, X, is allowed to take. The second component, B, is a measure of reliability (certainty) of the first component. Typically, A and B are described in a natural language. Example: (about 45 minutes, very sure). An important issue relates to computation with Z-numbers. Examples: What is the sum of (about 45 minutes, very sure) and (about 30 minutes, sure)? What is the square root of (approximately 100, likely)? Computation with Z-numbers falls within the province of Computing with Words (CW or CWW). In this lecture, the concept of a Z-number is introduced and methods of computation with Z-numbers are outlined. The concept of a Z-number has a potential for many applications, especially in the realms of economics, decision analysis, risk assessment, prediction, anticipation, rule-based characterization of imprecise functions and relations and biomedicine.
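The abstract leaves the actual arithmetic to the lecture. Purely as an illustration of the data structure (the interval bounds and the handling of the reliability component below are invented for this sketch, not Zadeh's definitions), a Z-number can be modeled as a pair of a constraint and a reliability label:

```python
# Illustrative sketch only: model the A component of a Z-number as an
# interval and add intervals; how the B (reliability) components combine
# is precisely the subject of the lecture, so we leave it symbolic.
def z_sum(z1, z2):
    (lo1, hi1), b1 = z1
    (lo2, hi2), b2 = z2
    return ((lo1 + lo2, hi1 + hi2), f"combine({b1}, {b2})")

about_45 = ((40, 50), "very sure")  # stand-in for "(about 45 minutes, very sure)"
about_30 = ((25, 35), "sure")       # stand-in for "(about 30 minutes, sure)"
print(z_sum(about_45, about_30))    # ((65, 85), 'combine(very sure, sure)')
```

In Zadeh's formulation, A and B are fuzzy sets described in natural language rather than crisp intervals, so real Z-number computation requires fuzzy arithmetic rather than the interval sum sketched here.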

**From Information to Foresight: Getting Beyond the Bits**

Wednesday, January 25, 12:00–1:00 pm, 310 Sutardja Dai Hall, Banatao Auditorium, UC Berkeley

Live broadcast: mms://media.citris.berkeley.edu/webcast

**Laura Haas,** IBM Fellow and Director, Institute for Massive Data, Analytics and Modeling, IBM Research

Data volumes are sky-rocketing, and new sources and types of information are proliferating; we can now track and obtain data faster than ever before. But data is only of value if you can extract insight from it—insights that let you solve your challenges, improve your processes, attract new clients, and be more nimble in your business. There is a real opportunity to harness this data and gain insight to improve our world—but to do so, we must do more than capture information. We must correlate and align information across sources, extract meaning from it, and leverage that meaning to create value. This talk will describe some of the challenges of capturing, integrating, and analyzing information and some of the progress that has been made in terms of runtimes and tools to support these tasks, as well as some ongoing research in this space. We will highlight some successful applications of these technologies in a variety of fields, and close with a proposal to work together to advance the state of the art in these technologies and in their application.

**LAPACK Seminar: Subspace Iteration with Random Start Matrix**

Wednesday, January 25, 12:10–1:00 pm, 380 Soda Hall, UC Berkeley

**Ming Gu,** UC Berkeley and LBNL/CRD

The power method and subspace iteration method can be used to find a few of the largest eigenvalues (or singular values). It is well known that their convergence rate critically depends on the separation of the eigenvalues (or singular values) as well as on the start matrix. In this talk, we develop new convergence results for these methods, for both deterministic and random start matrices.
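As background for the talk, a minimal sketch of subspace iteration with a random start matrix (a generic textbook version, not the speaker's new analysis) looks like this:

```python
import numpy as np

def subspace_iteration(A, k, iters=200, seed=0):
    """Approximate the k largest eigenvalues of a symmetric matrix A via
    subspace (block power) iteration from a random orthonormal start."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    # Random start matrix, orthonormalized
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
    for _ in range(iters):
        Z = A @ Q
        Q, _ = np.linalg.qr(Z)  # re-orthonormalize each step
    # Rayleigh-Ritz: eigenvalues of the small projected matrix Q^T A Q
    T = Q.T @ A @ Q
    return np.sort(np.linalg.eigvalsh(T))[::-1]

# Diagonal test matrix with known spectrum {10, 9, 1, 0.5}
A = np.diag([10.0, 9.0, 1.0, 0.5])
print(subspace_iteration(A, 2))  # ≈ [10., 9.]
```

The convergence rate of the unwanted directions is governed by the eigenvalue ratios (here 1/9 per step for the third eigenvalue), which is exactly the separation dependence the abstract refers to.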

**EECS Colloquium: Re-Configurable Exascale Computing**

Wednesday, January 25, 4:00–5:00 pm, 306 Soda Hall (HP Auditorium), UC Berkeley

**Steve Wallach,** Chief Scientist and co-founder of Convey Computers, an adviser to Centerpoint Venture Partners, Sevin-Rosen, and Interwest, and a consultant to the US DOE ASC (Advanced Scientific Computing) program at Los Alamos

HPC research is focused on achieving exaflop/exaop performance by 2020. Unlike the path to a petaflop, the general consensus is that vastly new programming paradigms, hardware architectures, and interconnects will be needed (as well as new power plants). This presentation will focus on increasing uni-processor performance and the role that application-specific heterogeneous computing and compilers play in evolving processor architecture.

**Numerical Analysis of Linear and Nonlinear Eigenvalue Problems for Electronic Structure Calculations**

Wednesday, January 25, 4:10–5:00 pm, 939 Evans Hall, UC Berkeley

**Eric Cances and Yvon Maday,** CERMICS and Paris VI

The numerical computation of the eigenvalues and eigenvectors of a self-adjoint operator on an infinite dimensional separable Hilbert space is a standard problem of numerical analysis and scientific computing, with a wide range of applications in science and engineering. Such problems are encountered in particular in mechanics (vibrations of elastic structures), electromagnetism and acoustics (resonant modes of cavities), and quantum mechanics (bound states of quantum systems).

Galerkin methods provide an efficient way to compute the discrete eigenvalues of a bounded-from-below self-adjoint operator A lying below the bottom of the essential spectrum of A. On the other hand, Galerkin methods may fail to approximate discrete eigenvalues located in spectral gaps, that is, between two points of the essential spectrum. In some cases, the Galerkin method cannot find some of the eigenvalues of A located in spectral gaps (lack of approximation); in other cases, the limit set of the spectrum of the Galerkin approximations of A contains points which do not belong to the spectrum of A (spectral pollution). Such problems arise in various applications, such as the numerical simulation of photonic crystals, of doped semiconductors, or of heavy atoms with relativistic models.
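For the benign case the abstract describes first (eigenvalues below the essential spectrum), a generic illustration of Galerkin-type eigenvalue approximation (not the speakers' analysis) is the standard discretization of the 1D Laplacian, whose discrete eigenvalues converge to the exact ones k²:

```python
import numpy as np

# Eigenvalues of -u'' on (0, pi) with Dirichlet boundary conditions:
# the exact spectrum is k^2 for k = 1, 2, 3, ...
n = 200                   # interior grid points
h = np.pi / (n + 1)       # mesh width

# Standard 3-point discretization (the finite-difference analogue of a
# piecewise-linear Galerkin discretization of the same problem)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

ev = np.sort(np.linalg.eigvalsh(A))
print(ev[:3])  # ≈ [1, 4, 9], converging as O(h^2)
```

Spectral pollution, by contrast, shows up only for operators whose essential spectrum has gaps (as in the relativistic models the abstract mentions); this self-adjoint, bounded-from-below example is the case where Galerkin approximation is provably safe.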

Another aspect of these problems is the strong nonlinearity and complexity that arise in models derived from the Schrödinger model. The numerical analysis of these nonlinear eigenvalue problems makes it possible to understand the convergence behavior: when it works, but also when and why the numerical algorithm lacks optimality. From this analysis, corrections can be proposed and implemented.

We will present recent results on the numerical analysis of these problems, obtained in collaboration with **Rachida Chakir** (University Paris 6) and **Virginie Ehrlacher** (Ecole des Ponts and INRIA).

**Design by Transformation — Application to Dense Linear Algebra Libraries**

Thursday, January 26, 1:00–2:00 pm, 50F-1647

**Robert van de Geijn,** University of Texas at Austin

The FLAME project has yielded modern alternatives to LAPACK and related efforts. An attractive feature of this work is the complete vertical integration of the entire software stack, starting with low-level kernels that support the BLAS and finishing with a new distributed memory library, Elemental. In between are layers that target single-core, multicore, and multi-GPU architectures. What this now enables is a new approach where libraries are viewed not as instantiations in code but instead as a repository of algorithms, knowledge about those algorithms, and knowledge about target architectures. Representations in code are then mechanically generated by a tool that performs optimizations for a given architecture by applying high-level transformations much like a human expert would. We discuss how this has been used to mechanically generate tens of thousands of different distributed memory implementations given a single sequential algorithm. By attaching cost functions to the component operations, a highly optimized implementation is chosen by the tool. The chosen optimization invariably matches or exceeds the performance of implementations by human experts. We call the underlying approach Design by Transformation (DxT).

**Systems Challenges in Global Scale Map Rendering**

Friday, January 27, 11:00 am–12:00 pm, 380 Soda Hall, UC Berkeley

**Yatin Chawathe,** Principal Software Engineer, Google

I will discuss the evolution of the Google Maps rendering infrastructure from a simple static rendering platform to an on-the-fly rendering infrastructure that is capable of handling hundreds of thousands of requests per second. This presentation will highlight the particular systems challenges of handling traffic at that scale while keeping our maps fresh and up to date.
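One generic ingredient of on-the-fly rendering at this scale (a textbook pattern, not a description of Google's actual system) is caching rendered tiles by their (zoom, x, y) key so that hot tiles are rendered once and then served cheaply:

```python
from functools import lru_cache

# Generic sketch: cache on-the-fly tile rendering by (zoom, x, y) so
# repeated requests for popular tiles skip the expensive render step.
@lru_cache(maxsize=10_000)
def render_tile(zoom, x, y):
    # Stand-in for the expensive rasterization of map data for one tile.
    return f"tile({zoom},{x},{y})"

render_tile(12, 655, 1582)            # first request: rendered
render_tile(12, 655, 1582)            # second request: served from cache
print(render_tile.cache_info().hits)  # 1
```

Keeping maps fresh then becomes a cache-invalidation problem: when underlying map data changes, the affected tile keys must be evicted or re-rendered, which is part of what makes the freshness requirement a systems challenge.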

**About Computing Sciences at Berkeley Lab**

The **Lawrence Berkeley National Laboratory** (Berkeley Lab) **Computing Sciences** organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials, and increasing our understanding of ourselves, our world, and our universe.

ESnet, the **Energy Sciences Network**, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The **National Energy Research Scientific Computing Center** (NERSC) powers the discoveries of 7,000-plus scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture, and high-performance software implementation. NERSC and ESnet are Department of Energy Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

**DOE’s Office of Science** is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.