InTheLoop | 11.06.2010

Berkeley Lab Expertise Highlighted in SC10 Technical Program

When SC10 convenes Nov. 13 in New Orleans, La., Berkeley Lab researchers will be making significant contributions to the conference program. Our staff will participate in three tutorials, join four panel discussions, and lead five Birds-of-a-Feather sessions; in addition, James Demmel will accept the IEEE Computer Society's Sidney Fernbach Award.

A special issue of Computing Sciences News provides complete information on LBNL activities at SC10. Watch for it in your email today, or click the link on the CS home page.

Fenius Networking Demo Scores a Global First

ESnet has been promoting global interoperability in virtual circuit provisioning by collaborating on the Fenius project. Recently this effort took another step forward by enabling four different provisioning systems to cooperate for the Automated GOLE demonstration at the GLIF workshop held at CERN in Geneva, Switzerland.

The demonstration was a complete success and achieved what is believed to be a global first: a virtual circuit was set up fully automatically through five different networks and four different provisioning systems. And it was fast: only about five minutes elapsed from the initiating request until packets were flowing from end to end. Read more.

Annette Greiner Discusses NEWT at InterLab 2010

Annette Greiner of NERSC’s Outreach, Software and Programming Group gave a talk last week at InterLab 2010, which was held November 1–4 at Oak Ridge National Laboratory. InterLab is a four-day workshop aimed at web designers, web application developers, online communications professionals, and managers of internet resources throughout the Department of Energy complex.

Greiner’s talk, “NEWT: Bringing High-Performance Computing to the Web,” discussed the technology underlying the web-based science gateways being developed at NERSC. The underlying API, NERSC Web Toolkit (NEWT), is a programming interface targeted at web developers who want to connect web services and applications to HPC.
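As a sketch of the style of interaction NEWT enables, the snippet below builds NEWT-style REST URLs that a web client could use to reach HPC resources. The base URL, resource names, and machine name here are illustrative assumptions, not documented details of the NEWT API:

```python
# Hypothetical NEWT-style REST client helper. The base URL and
# endpoint layout below are assumptions for illustration only.
import urllib.parse

NEWT_BASE = "https://newt.nersc.gov/newt"  # assumed base URL


def newt_url(resource, machine=None, **params):
    """Build a NEWT-style REST URL such as <base>/status/<machine>.

    Query parameters are URL-encoded, so a web developer can pass
    paths or job options without worrying about escaping.
    """
    parts = [NEWT_BASE, resource]
    if machine:
        parts.append(machine)
    url = "/".join(parts)
    if params:
        url += "?" + urllib.parse.urlencode(params)
    return url
```

A web application would then issue ordinary HTTP GET/POST requests against such URLs and consume the JSON responses, which is what lets browser-based science gateways drive HPC systems.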

Seaborg Fellow Is Named Research Scientist

Last week Esmond Ng, leader of the Scientific Computing Group (SCG) in CRD, announced that Maciej Haranczyk, a Seaborg Fellow in SCG since 2008, is now a Research Scientist in the group. Haranczyk will continue his research in computational chemistry, combinatorial chemistry, and chemoinformatics.

2011-2012 DOE CSGF Application Process Now Open

Fellows in the Department of Energy Computational Science Graduate Fellowship (DOE CSGF) program are using high performance computing to better understand fundamental properties of the world and universe around us, and to solve complex problems in areas of national importance, such as climate change and sustainable energy sources.

The online DOE CSGF application is now available. Applications for the 2011 class of fellows will be accepted through Tuesday, January 11, 2011. If you’d like to learn more about the application process, required materials or frequently asked questions, please visit the “How to Apply” section of the CSGF website.

This Week’s Computing Sciences Seminars

dCache.org and the European Efforts to Establish a Sustainable E-Infrastructure
Monday, Nov. 8, 10:00–11:00 am, 50F-1647
Patrick Fuhrmann, Deutsches Elektronen-Synchrotron (DESY)

At the beginning of 2010, the European Commission, in collaboration with its national partners, committed more than 110 million euros to consolidating the European e-infrastructure by launching six projects covering the harmonization and standardization of HPC and HTC middleware, the evaluation of cloud computing's potential, the integration of GLOBUS, and the establishment of a sustainable organizational framework to orchestrate those efforts and to interface the infrastructure with the various European scientific communities. The harmonization and standardization of the existing European middleware stacks will be coordinated by the second largest of those six projects, the European Middleware Initiative (EMI). Over three years, EMI will spend 23 million euros to, according to its website, improve the reliability, usability, and stability of the middleware services (gLite, ARC, UNICORE, and dCache), listening closely to the requirements of users and infrastructure providers.

This presentation will briefly describe the structure and objectives of the European Grid Infrastructure and of EMI, its main software provider, as well as the contribution of dCache.org to those challenging initiatives.

A Tour of Modern “Image Processing”
Monday, Nov. 8, 1:30–2:30 pm, 400 Cory Hall (Hughes Room), UC Berkeley
Peyman Milanfar, UC Santa Cruz

Recent developments in computational imaging and restoration have heralded the arrival and convergence of several powerful methods for adaptive processing of multidimensional data. Examples include Moving Least Squares (from Graphics), the Bilateral Filter and Anisotropic Diffusion (from Vision), Boosting and Spectral Methods (from Machine Learning), Non-local Means (from Signal Processing), Bregman Iterations (from Applied Math), and Kernel Regression and Iterative Scaling (from Statistics). While these approaches found their inspiration in diverse fields, they are deeply connected.

In this talk, I will present a practical and unified framework for understanding some common underpinnings of these methods. This leads to new insights and a broad understanding of how these diverse methods interrelate. I will also discuss several applications and the statistical performance of the resulting algorithms. Finally, I will briefly illustrate connections between these techniques and classical Bayesian approaches.
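The bilateral filter mentioned above is one of the simplest of these kernel-based methods to write down. The following is a naive NumPy sketch for illustration, not the speaker's implementation:

```python
import numpy as np


def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Naive bilateral filter for a 2D grayscale image with values in [0, 1].

    Each output pixel is a weighted mean of its neighborhood, with weights
    that fall off with both spatial distance (sigma_s) and intensity
    difference (sigma_r) -- so flat regions are smoothed while sharp
    edges are preserved.
    """
    h, w = img.shape
    out = np.empty_like(img)
    # Precompute the spatial (Gaussian) part of the kernel once.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    padded = np.pad(img, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: penalize neighbors with different intensity.
            range_w = np.exp(-((patch - img[i, j])**2) / (2 * sigma_r**2))
            weights = spatial * range_w
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out
```

Running this on a noisy step-edge image smooths each flat side while leaving the edge location intact, which is exactly the edge-preserving behavior the talk's unified framework sets out to explain.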

A Model-Based Classification Approach for the Visualization of High-Dimensional Spatial Data
Tuesday, Nov. 9, 9:30–10:30 am, 50F-1647
Roberto Sisneros, Oak Ridge National Laboratory

Today's scientific simulations at petascale and beyond offer grand opportunities for innovative visualization research. A need often mentioned in discussions with domain scientists is the ability to effectively study interactions among multiple variables in a simulation. In this talk, I will present a novel method that focuses on the calculation of multivariate models within scientific simulation datasets. Specifically, I will explore two considerations of such a problem: developing an appropriate visualization approach, and framing the issue so that model specification has high utility and usability. The visualization approach is derived from a point classification algorithm that summarizes many variables of a dataset into a single image via the creation of attribute subspaces. These attribute subspaces are then extended to encompass multivariate models. I will also discuss some of the details involved in implementing the approach.

Par Lab Seminar: Autotuning for the Intel Math Kernel Libraries
Tuesday, Nov. 9, 11 am–12:30 pm, 438 Soda Hall (Wozniak Lounge), UC Berkeley
Greg Henry, Intel

This talk will describe the auto-tuning efforts in the Intel(R) Math Kernel Library (MKL). We will discuss some of the unique perspectives we have, the strategies we use, and the problems we have yet to address, and we will cover our objectives in assessing new research in this area. We'll talk about both our successes and the places where we have room to grow, plus give a little insight into techniques that work particularly well for us, including strategies that are possibly unique to our specific needs. We also hope to use this seminar as an opportunity to learn whether Par Lab collaborations can take place in any of the many domains in Intel MKL: BLAS, LAPACK, ScaLAPACK, direct sparse solvers, iterative sparse solvers, FFTs, statistics, etc. We are interested in many-core optimization strategies, from different threading models to different runtime environments.

LAPACK Seminar: Sparse Modeling: A Statistical View
Wednesday, Nov. 10, 11:10 am–12:00 pm, 380 Soda Hall, UC Berkeley
Bin Yu, UC Berkeley

Extracting useful information from high-dimensional data is the focus of today's statistical research and practice. After the broad success of statistical machine learning on prediction through regularization, interpretability is gaining attention, and sparsity has been used as its proxy. With the virtues of both regularization and sparsity, the Lasso (L1-penalized L2 minimization) and its extensions have recently become very popular.

In this talk, I would like to give an overview of aspects of the statistical theory and practice of sparse modeling, including the Lasso and its extensions. First, I will explain what useful insights have been learned from model selection consistency analysis of the Lasso and an L1-penalized sparse covariance estimation method when p >> n. Second, I will present results on L2-estimation error (when p >> n) and insights learned for a class of M-estimation methods with decomposable penalties. As special cases, these latter results cover the Lasso, L1-penalized GLMs, the grouped Lasso, and low-rank sparse matrix estimation. (This talk is based on joint work with co-authors Zhao, Meinshausen, Ravikumar, Raskutti, Wainwright, and Negahban.)
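The L1-penalized L2 minimization behind the Lasso can be solved by cyclic coordinate descent with soft-thresholding, which is also what produces the exact zeros that make the fit interpretable. The following is a minimal NumPy sketch of that classical algorithm, not code from the talk:

```python
import numpy as np


def soft_threshold(x, t):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)


def lasso_cd(X, y, alpha, n_iter=200):
    """Minimize 0.5 * ||y - X w||^2 + alpha * ||w||_1 by coordinate descent.

    Each sweep updates one coordinate at a time: compute the partial
    residual with coordinate j removed, then shrink its least-squares
    update toward zero. Coordinates whose correlation with the residual
    stays below alpha are set exactly to zero.
    """
    n, p = X.shape
    w = np.zeros(p)
    z = (X ** 2).sum(axis=0)  # per-coordinate curvature ||X_j||^2
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + w[j] * X[:, j]  # residual excluding coordinate j
            w[j] = soft_threshold(X[:, j] @ r, alpha) / z[j]
    return w
```

On data generated from a truly sparse model, the recovered coefficient vector has nonzero entries only at (approximately) the true support, which is the model-selection behavior the consistency analysis in the talk characterizes.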

Adaptive Computation of Electromagnetic Problems
Wednesday, Nov. 10, 1:30–2:30 pm, 50A-5132
Zhiming Chen, Institute of Computational Mathematics, Chinese Academy of Sciences

We report our recent progress in developing adaptive finite element methods, based on a posteriori error estimates, for solving electromagnetic problems. We first develop an adaptive edge element method with multilevel preconditioning for solving the time-harmonic Maxwell equations with singularities. The total computational cost is optimal in the sense that the energy error is proportional to N^{-1/3} using O(N) operations. We then consider the adaptive immersed interface finite element method for solving electromagnetic problems with discontinuous coefficients, in which the meshes need not fit the interfaces. Finally, we consider the adaptive finite element method for solving the eddy current model with voltage excitations for complicated three-dimensional structures.

Demonstration of a CFD Tool Used for Data Center Modeling, Thermal Analysis and Operational Management
Thursday, Nov. 11, 12:00 pm, 90-3122
Saket Karajgikar, Future Facilities, Ltd.

Every data center built today is designed with a total capacity in mind, as well as a plan to grow into this final-day load. On a daily basis, data center operations and management professionals work toward keeping their data center as close to this plan as possible by concurrently managing the available power, space, cooling, and airflow resources. Unfortunately, a lack of communication and information, the pace of change, and the difficulty of coping with the ever-growing power densities of IT equipment can prevent a data center from operating to plan. The result is unrealized capacity and an early end of life for the data center.

This presentation will feature a suite of software tools, based on computational fluid dynamics (CFD) analysis, that is designed specifically to address the challenges of designing and managing the modern data center. The suite integrates a CAD-like model-building interface, a CFD analysis capability tuned for the data center application, a library of data center modeling objects, and links to related data center design and operational management tools that open new lines of communication between IT and facilities. This collection of capabilities overcomes two major challenges: the effective use of CFD analysis in a business environment, and the active management of airflow, which traditionally is dealt with reactively. The result is a solution that helps the data center operate to plan and realize the full potential of the initial investment.

IPv6 — No Longer Optional AND Implementing IPv6 at the American Registry for Internet Numbers: An Experience Report
Friday, Nov. 12, 12:00 pm, 90-3122
Matt Ryanczak and Richard Jimmerson, American Registry for Internet Numbers (ARIN)

Less than 7 percent of the available pool of IPv4 addresses remains, and it is expected to run out in about a year. The solution is IPv6. While IPv6's seemingly endless supply of addresses will help support next-generation, IP-based networks and services, many companies have been slow to adopt it because of cost and the need for bridging technology to make IPv4 and IPv6 systems compatible. The Lab began work on an IPv6 trial in early 2009.
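To make the scale of that "seemingly endless" IPv6 address space concrete, Python's standard ipaddress module can compare the two protocols. This is a simple back-of-the-envelope illustration, not part of the speakers' material:

```python
import ipaddress

# Total address space of each protocol version.
v4_total = ipaddress.ip_network("0.0.0.0/0").num_addresses  # 2**32 addresses
v6_total = ipaddress.ip_network("::/0").num_addresses       # 2**128 addresses

# IPv6 provides 2**96 times as many addresses as IPv4.
ratio = v6_total // v4_total

# The same module parses both address families uniformly.
addr = ipaddress.ip_address("2001:db8::1")  # documentation-prefix IPv6 address
```

Roughly 4.3 billion IPv4 addresses versus about 3.4 x 10^38 IPv6 addresses is why exhaustion is an IPv4-only problem.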

This dual presentation should be of interest to both network engineers and those with an interest in the smart grid or networked building technologies.

Richard Jimmerson, CIO of the American Registry for Internet Numbers (ARIN), will discuss the key considerations for and benefits of IPv6 adoption. Richard will review regional and global IPv4 depletion and IPv6 adoption statistics, address allocation trends, and resources available to help system and network administrators prepare.

ARIN finished enabling all of its systems and services for IPv6 in 2008. Matt Ryanczak, Network Operations Manager of ARIN, will describe in more technical detail how ARIN progressed from a single IPv6-only T1 circuit to its Chantilly, Virginia, office to a full IPv6 deployment.

ARIN is the nonprofit corporation that manages the distribution of Internet number resources, including IPv4 and IPv6 addresses and Autonomous System Numbers (ASNs), to Canada, many Caribbean and North Atlantic islands, and the United States.

Link of the Week: Young Artists, Scientists Think Logically, Creatively

Do scientists and artists think differently? Fifty years ago, novelist/physicist C.P. Snow famously fretted that the two disciplines were drifting apart, and subsequent research suggested he was onto something. Science students tended to excel at logical, analytical thinking, while budding artists scored highest in tests measuring imagination and creativity.

But a newly published study of seniors at one British university reports that the distinction has virtually vanished over the past five decades. Writing in the journal Thinking Skills and Creativity, Peter K. Williamson of the University of Derby reports that "no differences were found in the problem-solving skills of arts and science students." Read more.

About Computing Sciences at Berkeley Lab

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.

Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.