
InTheLoop 10.26.2015

October 26, 2015

Lab Staff to Ply Expertise at SIAM Conference on Applied Linear Algebra

Xiaoye Sherry Li of CRD's Scalable Solvers Group will give an invited talk at the 2015 SIAM Conference on Applied Linear Algebra being held Oct. 26-30 in Atlanta. She and 10 other Computing Sciences staff contributed papers and posters to the conference program. The conference, held every three years, is the leading meeting for researchers in the linear solvers community.

Attending SC15? Consider Taking a Turn Staffing the DOE Booth

Planning for the joint DOE booth at SC15 is moving right along, and the planning committee is now looking for lab folks to help staff the booth during exhibit hours. Volunteers are needed to steer visitors to electronic posters, job lists, demos and discussions, as well as to talk about their own work in general. Fourteen labs are represented at the booth, so visitors really need some guidance to find the best material.

If you can spare a couple of hours on Tuesday, Wednesday or Thursday, you can help the lab take advantage of a good opportunity to tell our story. Volunteers will be given a brief overview of the booth and materials for guidance. Questions? Contact Jon Bashor, jbashor@lbl.gov.

Reminder: Register Now for Nov. 12 Wang Hall Dedication

All Lab employees are invited to attend the dedication of Computing Sciences' new home, Wang Hall (also known as the Computational Research and Theory facility, or CRT). Starting at 1 p.m. on Thursday, Nov. 12, a dedication ceremony will be followed by tours of the building and a symposium on “Pioneering the Next Computing and Internet Frontier for Scientific Discovery.” The dedication and symposium will be held on the fourth floor of the building prior to the installation of furniture, so there will be room for everyone in the Lab community to attend. Please visit http://cs.lbl.gov to register and to see a detailed agenda.

Reminder: Nov. 2 BIDS Panel to Address Future of Open Science and Scholarly Publishing

The landscape of scholarly communication is changing rapidly as calls for more open research and scholarly communication continue to grow. The Berkeley Institute for Data Science (BIDS) is sponsoring a panel on the future of open science and open publishing on Monday, November 2, 10am–12pm in the Banatao Auditorium (310 Sutardja Dai Hall) on the UC Berkeley campus.

Panelists include

  • Ann Gabriel, vice president, Academic & Research Relations for publisher Elsevier;
  • Laurel Haak, executive director for ORCID, a non-profit creating an open registry of unique researcher IDs (independent of for-profit publishers);
  • Jeff MacKie-Mason, university librarian for UC Berkeley; and
  • Dan Morgan, digital science publisher for the University of California Press.

They will present their organizations' initiatives surrounding open science and communication and discuss their visions for the future landscape of academic publishing. An open discussion, including audience questions, will follow.

This Week's CS Seminars

MAC-Layer Algorithm Designs for Hybrid Access Network

Tuesday, October 27, 10–11am, Bldg. 50B, Room 1237 (NOC)

Anu Mercian, Arizona State University

Access networks connect end users to the core network and thus form the most critical segment for Internet connectivity. They employ varied physical-layer media, such as stand-alone fiber or fiber combined with DSL and/or wireless links; such combinations are collectively called hybrid access networks. We explore hybrid access networks, providing medium access control (MAC) layer solutions to specific challenges in different network deployments. This seminar reviews passive optical access network architecture, introduces a multi-thread polling bandwidth-allocation algorithm for long-range passive optical networks (LR-PONs) that exploits the propagation delay incurred by lengthy fiber cables, and introduces a gated flow-control technique to reduce the buffer requirement at the drop point in PON-xDSL hybrid access networks. Simulation analysis shows the improved performance of these techniques compared with existing approaches.

Auto-tuning and Language Abstractions for GPUs

Wednesday, October 28, 2–3pm, Bldg. 50F, Room 1647

Frank Mueller, North Carolina State University

This talk introduces search and optimization techniques for auto-tuning nearest-neighbor computations on GPUs. We auto-generate multi-kernel codes for heterogeneous accelerator environments that deliver performance close to handcrafted codes. Next, a hierarchical data parallel language is introduced that improves coding productivity for hierarchical data parallelism while maintaining performance within a source-to-source translation scheme. Finally, we show that the trade-offs between scratch-pad memories and caches on GPUs are application dependent and require a close understanding of the underlying architecture, thread parallelism and memory hierarchy.
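At its core, auto-tuning is an empirical search over implementation parameters: generate candidate code variants, time each, and keep the fastest. The sketch below is a deliberately simple CPU/NumPy analogue of that idea, not the speaker's GPU framework; the `stencil` kernel, its `block` parameter, and the candidate list are all hypothetical.

```python
import time
import numpy as np

def stencil(a, block):
    """Toy nearest-neighbor update (sum of up/down neighbors with wraparound),
    processed in row blocks of the given size -- the tunable parameter."""
    out = np.empty_like(a)
    n = a.shape[0]
    for i in range(0, n, block):
        j = min(i + block, n)
        out[i:j] = np.roll(a, 1, 0)[i:j] + np.roll(a, -1, 0)[i:j]
    return out

def autotune(candidates, a, reps=3):
    """Exhaustive empirical search: time each candidate block size and
    return the fastest one."""
    best, best_t = None, float("inf")
    for block in candidates:
        t0 = time.perf_counter()
        for _ in range(reps):
            stencil(a, block)
        elapsed = time.perf_counter() - t0
        if elapsed < best_t:
            best, best_t = block, elapsed
    return best
```

Real auto-tuners prune this search with models or heuristics, since exhaustively timing every variant quickly becomes infeasible on GPUs with many interacting parameters.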


To join the seminar by audio conference:

Dial Toll-Free Number: 866-740-1260 (U.S. and Canada)

International participants dial: +1 303-248-0285 (or Toll-Free via the web http://www.readytalk.com/intl)

Enter 7-digit access code: 4952431, followed by “#”

Place your phone on mute.

Data-Driven Stochastic Parametrization and Dimension Reduction in Nonlinear Dynamics

Wednesday, October 28, 3:30–4:30pm, 939 Evans Hall, UC Berkeley

Fei Lu, UC Berkeley

Prediction of high-dimensional nonlinear dynamic systems is often difficult when only partial observations are available, because such systems are often expensive to solve in full and the initial data are often incomplete. The development of reduced models for the observed variables is thus needed. The challenges come from the nonlinear interactions between the observed and unobserved variables, and from the difficulty of quantifying uncertainties from discrete data. We address these challenges by developing discrete nonlinear stochastic reduced systems, in which one formulates discrete solvable approximate equations for the observed variables and uses data and statistical methods to account for the impact of the unobserved variables. A key ingredient in the construction of the stochastic reduced systems is a discrete-time stochastic parametrization based on inference of nonlinear time series. We demonstrate our approach on some model problems, including the Lorenz 96 system and the Kuramoto-Sivashinsky equation. This is joint work with Alexandre Chorin and Kevin Lin.
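As a toy illustration of the general idea (not the authors' construction), the sketch below simulates a two-variable system, truncates it to the observed variable alone, and fits a discrete-time AR(1) model to the residuals that stand in for the unobserved coupling. The system, its coefficients, and the AR(1) choice are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 20000

# Full system: observed x coupled to an unobserved, noisy fast variable y.
x = np.empty(n); y = np.empty(n)
x[0], y[0] = 1.0, 0.0
for k in range(n - 1):
    x[k + 1] = x[k] + dt * (-x[k] + y[k])
    y[k + 1] = y[k] + dt * (-10 * y[k] + 5 * x[k]) \
               + 0.1 * np.sqrt(dt) * rng.standard_normal()

# Truncated (reduced) model for x alone: x' = -x.
# The residuals r_k capture everything the truncation misses.
r = x[1:] - (x[:-1] + dt * (-x[:-1]))

# Discrete-time stochastic parametrization: fit r_{k+1} = a r_k + sigma e_k
# to the residual series by least squares.
a = np.dot(r[1:], r[:-1]) / np.dot(r[:-1], r[:-1])
sigma = np.std(r[1:] - a * r[:-1])
```

The reduced model then advances x with the truncated dynamics plus the fitted AR(1) noise term in place of the true coupling; the actual work uses far richer nonlinear time-series models inferred in the same discrete-time spirit.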

BIDS Data Science Lecture: Computational Thinking and the Pedagogy of Computable Content

Friday, October 30, 1–2:30pm, 190 Doe Library, UC Berkeley

Lorena A. Barba, George Washington University

Last year, Barba gave a keynote that launched from the idea of “computational thinking,” put forward that computing generates new knowledge (which is why we care about reproducibility in computational science), and proposed that computing is hence a form of learning. She gained this insight from observing students work with IPython (now Jupyter) notebooks in her classroom and by reflecting on the connectivist view of knowledge. Connectivism posits that knowledge is created by interconnecting individuals in a community; it is distributed across a network of connections. Learning happens through interactions and conversations. Computing can thus be a way to interact, a form of conversation with the system under study, and so it enters into the learning process. Computational thinking, as popularized in the last decade, is an approach to problem solving inspired by computer science. Compared to how the term was originally used by Seymour Papert in Mindstorms (1980), the contemporary definition seems shallow. Papert said, “My interest is in universal issues of how people think and how they learn to think.” He imagined that the learning environment could be reshaped by computing. Tools like Jupyter notebooks are a new opportunity to realize that dream; they are what Barba calls computable content: educational content made powerfully interactive via compute engines in the learning platform.

More Upcoming Seminars

NMF and SVD: Two Cases of Parallelization

Monday, November 2, 1–2pm, Bldg. 50F, Room 1647

Marian Vajtersic, University of Salzburg, Austria and Mathematical Institute, Slovak Academy of Sciences, Bratislava, Slovakia

This talk will be devoted to the parallelization of two matrix factorizations: Nonnegative Matrix Factorization (NMF) and Singular Value Decomposition (SVD). Both of them arise in applications in which large matrices have to be represented in the form of smaller factors which are more suitable for further processing.

One such example arises when the matrix is large and has only nonnegative entries. The goal of NMF is to represent such a matrix, approximately, as the product of two significantly smaller matrices. NMF has the nice property that the resulting factors are also nonnegative. Our approach to computing the NMF is based on Newton's method, where in each iteration both approximate factors are computed in an alternating manner.
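The alternating structure can be illustrated with the classic multiplicative-update rule of Lee and Seung, a much simpler scheme than the Newton-based method the talk describes; the function name and parameters below are illustrative only.

```python
import numpy as np

def nmf(A, r, iters=200, eps=1e-9, seed=0):
    """Approximate a nonnegative m x n matrix A as W @ H, with W (m x r)
    and H (r x n) nonnegative. Lee-Seung multiplicative updates: each
    update alternately rescales one factor while holding the other fixed,
    which preserves nonnegativity by construction."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (A @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H
```

The `eps` guard avoids division by zero; the alternating fix-one-update-the-other pattern is the same skeleton the Newton-based approach follows, with a different inner solve.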

The other method is the SVD. Its importance for computational practice is well known, and fast SVD methods deserve serious attention nowadays. Our focus will be on problems with large dense matrices. The method considered is the block Jacobi method, in both its two-sided and one-sided variants.
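For reference, a serial, textbook one-sided Jacobi SVD looks like the sketch below; the talk concerns block and parallel variants of this same idea. The method applies plane rotations to column pairs until all columns of the working matrix are mutually orthogonal, after which the column norms are the singular values.

```python
import numpy as np

def jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """One-sided Jacobi SVD of a full-column-rank m x n matrix A (m >= n).
    Returns U, sigma, V with A = U @ diag(sigma) @ V.T."""
    U = A.astype(float).copy()
    n = U.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0                       # largest relative column correlation
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]
                off = max(off, abs(gamma) / np.sqrt(alpha * beta))
                if abs(gamma) < tol:
                    continue
                # Rotation angle that zeroes the (p,q) entry of U.T @ U.
                zeta = (beta - alpha) / (2.0 * gamma)
                t = 1.0 / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                if zeta < 0:
                    t = -t
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                # Apply the rotation to columns p, q of U and V.
                Up = U[:, p].copy()
                U[:, p] = c * Up - s * U[:, q]
                U[:, q] = s * Up + c * U[:, q]
                Vp = V[:, p].copy()
                V[:, p] = c * Vp - s * V[:, q]
                V[:, q] = s * Vp + c * V[:, q]
        if off < tol:                   # all columns orthogonal: converged
            break
    sigma = np.linalg.norm(U, axis=0)   # singular values = column norms
    return U / sigma, sigma, V
```

The two-sided variant instead rotates on both sides to diagonalize the matrix directly; the block versions discussed in the talk group columns so that each rotation step becomes a small dense subproblem suited to parallel execution.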

Due to the complexity of these problems, parallelism is indispensable for both factorizations, especially when the matrix size is large. We will show our original parallel solutions in more detail, discuss their properties, including experiments, and conclude with some remarks concerning further research.