InTheLoop | 08.11.2014

ESnet Names Inder Monga as Division Deputy, Patty Giuntoli to Lead Network Engineering

Patty Giuntoli is the new leader of ESnet's Network Engineering Group, taking over for Mike Bennett, who retired earlier this year. Giuntoli will also continue to serve as area lead for Networking and Systems.

In announcing her new role, ESnet Director Greg Bell said that Giuntoli has significant experience in project management, process management and global operations, having led WAN networking groups at Oracle Corp. and Kaiser Permanente, adding “she has the perfect skill set for taking on this new position.” Before joining ESnet, Giuntoli was head of the Infrastructure Department in the Lab’s IT Division.

Bell also announced that ESnet’s Chief Technology Officer Inder Monga has been appointed division deputy overseeing network research and technology. “This is in recognition of the critical role that research, development and innovation play in our ability to improve science outcomes,” Bell said.

In his role as division deputy, Monga will continue to oversee the Advanced Networking Technologies Group, the Tools Group and the Office of the CTO. Prior to joining ESnet, Monga worked for Wellfleet Communications and Canadian telecom company Nortel, where he focused on application and network convergence.

NERSC Launches Next-Generation Code Optimization Effort

With the promise of exascale supercomputers looming on the horizon, much of the roadmap is dotted with questions about hardware design and how to make these systems energy-efficient enough that centers can afford to run them. Often taking a back seat is an equally important question: will scientists be able to adapt their applications to take advantage of exascale systems once they arrive?

The Department of Energy's (DOE) National Energy Research Scientific Computing Center (NERSC), located at Lawrence Berkeley National Laboratory, is working to address this gap with the NERSC Exascale Science Applications Program (NESAP), a robust application readiness effort launched to support NERSC's next-generation supercomputer, Cori. A Cray XC system slated to be deployed at NERSC in 2016, Cori is intended to meet the growing computational needs of DOE's science community and serve as a platform for transitioning users to energy-efficient, manycore architectures. NESAP, which will include partnerships with 20 application code teams and technical support from NERSC, Cray and Intel, was created to help this transition run smoothly. »Read more.

Register Now for 2014 Parallel Programming Bootcamp Held August 18-20

Sponsored by the Parallel Computing Lab (ParLab) on the UC Berkeley campus, the 2014 Short Course on Parallel Programming ("boot camp") will be held August 18 through 20. The course offers programmers a practical introduction to parallel programming techniques and tools on current parallel computers, including shared-memory/multicore, GPU/manycore, distributed-memory and cloud computing. It will provide an introduction to parallel architectures and programming issues; a thorough, hands-on exposure to languages and tools for shared-memory programming; a presentation of high-level parallel programming patterns and libraries that can greatly simplify programming; an overview of programming on other important parallel architectures (GPUs, clouds and distributed-memory machines); and in-depth discussions of a variety of exciting parallel applications from image recognition, computer music and other areas. Free registration is required by August 18. »Read more.

Horst Simon Comments on IBM 'Brain Chip' in NY Times

The New York Times called on Deputy Director Horst Simon to comment on TrueNorth, a new chip IBM researchers unveiled in an article published in the journal Science last week. Simon, former director of NERSC and head of Computing Sciences at Berkeley Lab, was one of a team awarded the 2009 Gordon Bell Prize for the first full-scale simulation of a cat-sized brain cortex. »Read more.

This Week's Computing Sciences Seminars

Large-scale Hartree-Fock Calculations on Heterogeneous Clusters

Tuesday, August 12, 2–3 p.m., Bldg. 50F Room 1647
Edmond Chow, School of Computational Science and Engineering, Georgia Institute of Technology

This talk focuses on the computation and communication bottlenecks in Hartree-Fock self-consistent field iterations on large heterogeneous clusters. We present algorithms and software components for addressing these bottlenecks, including: 1) new load-balancing techniques, 2) dynamic scheduling of tasks onto CPUs and coprocessors simultaneously, and 3) diagonalization-free algorithms for computing the density matrix. We demonstrate solutions to these challenges on the Stampede and Tianhe-2 supercomputers, using Intel Xeon Phi coprocessors. We also performed a quantum mechanical study of protein-ligand binding to examine the suitability of truncated model systems; the largest simulations involved 2,938 atoms and 27,394 basis functions.

Joint work between Georgia Institute of Technology, Intel Corporation, and National University of Defense Technology, China.
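
The abstract does not include code, but item 2 above follows a familiar pattern. As a rough illustration only, the Python sketch below shows the generic work-queue idea behind dynamic scheduling across heterogeneous workers: CPU threads and a (simulated) coprocessor pull tasks as they finish, so faster devices naturally take on a larger share. Task contents, device names and timings here are hypothetical, not from the talk.

```python
import queue
import threading
import time

# A shared work queue; task IDs stand in for blocks of Fock-matrix work.
tasks = queue.Queue()
for task_id in range(200):
    tasks.put(task_id)

completed = {}          # device name -> number of tasks finished
lock = threading.Lock()

def worker(device, seconds_per_task):
    """Pull tasks until the queue is empty; faster devices finish more."""
    while True:
        try:
            tasks.get_nowait()
        except queue.Empty:
            return
        time.sleep(seconds_per_task)  # placeholder for real computation
        with lock:
            completed[device] = completed.get(device, 0) + 1

threads = [
    threading.Thread(target=worker, args=("cpu-0", 0.002)),
    threading.Thread(target=worker, args=("cpu-1", 0.002)),
    threading.Thread(target=worker, args=("phi-0", 0.0005)),  # "coprocessor"
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(completed)  # the faster "coprocessor" ends up with the larger share
```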

A New Geometric Approach to Topic Modeling and Discovery

Tuesday, August 12, 3–4 p.m., 400 Cory Hall on the UC Berkeley Campus
Prakash Ishwar, Boston University

In this talk I will present a new algorithm for topic discovery based on the geometry of cross-document word-frequency patterns. The geometric perspective gains significance under the so-called separability condition, which posits the existence of novel words that are unique to each topic. The algorithm uses random projections to identify novel words and their associated topics. The key insight is that the maximum and minimum values of cross-document frequency patterns projected along any direction are associated with novel words. In contrast to maximum-likelihood and Bayesian approaches, which require solving non-convex optimization problems using approximations or heuristics, the new algorithm is convex and asymptotically consistent, with provable performance guarantees. While our sample complexity bounds for topic recovery are similar to the state of the art, the computational complexity of our scheme scales linearly with the number of documents and the number of words per document. We present several experiments on synthetic and real-world datasets to demonstrate the qualitative and quantitative merits of our scheme. This talk is based on joint work with Ding, Rohban, and Saligrama at Boston University.
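
The "key insight" above translates almost directly into code. Below is a minimal Python sketch on synthetic data, assuming a row-normalized word-by-document frequency matrix; all data and names here are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_docs, n_projections = 500, 2000, 100

# Hypothetical word-by-document frequency matrix (rows = words).
X = rng.poisson(1.0, size=(n_words, n_docs)).astype(float)

# Row-normalize so each word has a cross-document frequency *pattern*.
row_sums = X.sum(axis=1, keepdims=True)
row_sums[row_sums == 0] = 1.0
P = X / row_sums

candidates = set()
for _ in range(n_projections):
    d = rng.standard_normal(n_docs)       # random direction
    proj = P @ d                          # project every word's pattern
    candidates.add(int(np.argmax(proj)))  # extreme points along the
    candidates.add(int(np.argmin(proj)))  # direction: candidate novel words

print(f"{len(candidates)} candidate novel words from {n_projections} projections")
```

Repeating the projection along many random directions makes it likely that each genuinely novel word lands at an extreme along some direction, which is why the candidate set grows toward the novel words as projections accumulate.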

Statistical learning of sparse neural representations for speech production

Thursday, August 14, 12–1 p.m., Bldg. 50F Room 1647
Kristofer Bouchard, Visualization Group, Lawrence Berkeley National Laboratory

The notion of sparsity, in which the output of a system depends on only a small number of its inputs, has proven invaluable to statistical learning of natural signals. Indeed, the role of sparsity in cortical processing of sensory signals is well established: elementary features for sensory processing are encoded by sparsely occurring neural activity patterns. In contrast, it is poorly understood whether and how sparse representations are used in cortical circuits for generating movements. Here I will investigate sparsity in the sensorimotor representation of speech production. Speech production is well described at the behavioral level by linguistic studies, but poorly understood cortically. Intracranial electrocorticography (ECoG) provides a unique opportunity to record high spatio-temporal resolution field potentials (FPs) directly from the surface of sensorimotor cortex in humans with broad spatial coverage. I will describe a method for the unbiased estimation of sparse model parameters and apply it to the problem of decoding speech production from concurrently recorded brain activity. Additionally, application of independent component analysis to spatio-temporal patterns of neural activity during speech production suggests the existence of sparse representations that span multiple temporal scales. Motivated by this observation, I will outline a dynamical-systems analysis approach to understanding multi-scale dynamics in neural circuits.
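
The abstract does not spell out the estimator. Purely to make the sparse-decoding setup concrete, here is a minimal synthetic sketch using an ordinary L1-regularized linear model (scikit-learn's Lasso). Note that the Lasso shrinks, and therefore biases, its coefficients, so this is a generic stand-in rather than the unbiased method the talk describes; the data and channel indices are invented.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_trials, n_channels = 300, 64

# Synthetic stand-in for per-trial ECoG features (e.g., FP amplitudes).
X = rng.standard_normal((n_trials, n_channels))

# Hypothetical ground truth: only a few channels carry signal (sparsity).
w_true = np.zeros(n_channels)
w_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(n_trials)  # decoded speech feature

model = Lasso(alpha=0.1).fit(X, y)
support = np.flatnonzero(model.coef_)
print("channels selected by the sparse decoder:", support)
```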

CS Exascale Seminar: Silicon Photonics for Exascale

Friday, August 15, 11 a.m.–12:30 p.m., Bldg. 50B Room 4205
Ke Wen, Graduate Student Research Assistant, Future Technologies Group, Lawrence Berkeley National Laboratory

Driven by many-core architectures and massive parallelism, the scalability of high-performance computing systems is increasingly challenged by the growing energy costs and inadequate bandwidths of inter-node communication substrates. For these highly parallel systems, power-efficient scaling of computational performance is determined largely by data movement capabilities. The application of traditional optical devices at short distances (< 10 m) has so far been limited by the bulkiness, high cost and relative inefficiency of endpoint components. Silicon photonics offers the possibility of delivering the communications bandwidths needed to match the growing computing power of HPC systems, with highly scalable energy efficiency and cost-effective components. However, inserting photonic interconnects is not a one-for-one replacement: a major gap exists between the properties demonstrated separately by silicon photonic device technologies and our understanding of how they can be combined to fully leverage optical data movement in building new generations of interconnected system architectures.

This talk will give an introduction to the basic working principles, state-of-the-art applications and technical challenges of silicon photonic technologies. We will also discuss how a flattened, high-bandwidth communication substrate could affect the computing stack, and the challenges of incorporating silicon photonics into future exascale systems. Finally, we will show modeling and optimization efforts toward addressing some of these challenges.

Neural Dust and Neural Interfaces

Saturday, August 16, 11 a.m. – 12 p.m., 159 Mulford Hall on the UC Berkeley Campus
Michel M. Maharbiz, Department of Electrical Engineering and Computer Science, University of California, Berkeley

A major technological hurdle in neuroprosthetics is the lack of an implantable neural interface system that remains viable for a lifetime. I will discuss the basics of extracellular neural recording, discuss the state of the art in cortical neural recording and introduce Neural Dust, a concept developed with Elad Alon, Jose Carmena and Jan Rabaey, which aims to develop a tetherless method to remotely record action potentials from the mammalian cortex.

This free public talk is presented as part of the monthly "Science@Cal Lecture Series."