InTheLoop | 05.04.2015

Yelick Addresses National Science Bowl Contestants

Every spring, 16 teams of high school students converge on Washington, D.C., to compete in the National Science Bowl after winning their regional competitions. The four-day event coincides with National Science Day, and this year Computing Sciences Associate Lab Director Kathy Yelick was an invited speaker. Yelick presented “Saving the World with Computing,” explaining how scientific computing is tackling some of the globe’s biggest challenges.

Leading up to the finals in Washington, more than 9,500 high school students and 4,500 middle school students competed in 70 high school and 50 middle school regional Science Bowl tournaments. Each team consists of four students, one alternate and a teacher who serves as adviser and coach. The teams face off in a fast-paced question-and-answer format, fielding questions across a range of science disciplines. Among the finalists at this year’s competition was Yelick’s alma mater, Valley High School in West Des Moines, Iowa.

DOE created the National Science Bowl in 1991 to encourage students to excel in mathematics and science and to pursue careers in these fields. More than 250,000 students have participated in the National Science Bowl over its 25-year history, making it one of the nation’s largest science competitions.

Why We Need Math to Blow up Stars

Do you need math to blow up a star? Why bother? CRD scientist Ann Almgren answered those questions in less than 10 minutes at Berkeley Lab's Science at the Theater in Oakland last Wednesday.

Understanding supernovae is key to understanding the history of our universe and the origins of life, but scientists can't get up close to watch an exploding star. They can't climb inside one, either. What they can do is model supernovae on supercomputers using fluid dynamics equations, though only "for about two seconds," Almgren said. She and her colleagues at the Center for Computational Sciences and Engineering at Berkeley Lab developed mathematical techniques and codes that make it possible to model supernovae for much longer stretches of time, so scientists can better study and understand these important cosmic explosions. »Watch the video.
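
To give a flavor of what "modeling with fluid dynamics equations" looks like in practice, here is a deliberately tiny sketch (a generic illustration, not the CCSE codes themselves): it advances a one-dimensional advection equation, the simplest building block of the fluid equations, on a grid with an explicit upwind scheme. The grid size, wave speed and time step are arbitrary choices for the example.

    # Minimal 1D linear advection solver (illustration only; not the CCSE supernova codes).
    # Advances du/dt + a*du/dx = 0 with a first-order upwind scheme and periodic boundaries.
    import numpy as np

    def advect(u, a, dx, dt, steps):
        """March the profile u forward in time with an explicit upwind update."""
        c = a * dt / dx                      # CFL number; must stay below 1 for stability
        for _ in range(steps):
            u = u - c * (u - np.roll(u, 1))  # upwind difference, assuming a > 0
        return u

    if __name__ == "__main__":
        nx = 200
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        dx = x[1] - x[0]
        u0 = np.exp(-200.0 * (x - 0.3) ** 2)           # a Gaussian pulse to transport
        u1 = advect(u0, a=1.0, dx=dx, dt=0.4 * dx, steps=500)
        print("pulse peak moved from x =", x[np.argmax(u0)], "to x =", x[np.argmax(u1)])

Production supernova codes solve far richer equations in three dimensions, with gravity and nuclear reactions, but the basic pattern of stepping a gridded state forward in time is the same.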

ESnet Inks Transatlantic Network Resiliency Agreement

At the Internet2 Global Summit last week, ANA-200G and ESnet announced a new agreement that improves the resiliency of the world’s fastest intercontinental network for research and education.

ANA-200G is a fully resilient 100 Gbps network that traverses the North Atlantic Ocean and supports data-intensive research and education applications. This network is funded by a group of national research and education networks (NRENs), including Internet2, NORDUnet, CANARIE and SURFnet. The agreement enables reciprocal backup between ANA-200G and ESnet’s 340 Gbps transoceanic infrastructure.

The collaboration is critical for supporting data flows from European and North American research instruments, institutions and individual researchers. Fiber cuts in subsea cable systems are rare, but when one does occur, the affected cable can be down for weeks because of the complexity of the repair process and weather uncertainties.

Both ANA-200G and ESnet's high-speed networks were in full production at the end of 2014. With this agreement they will function as one system in the unusual case of a major failure, creating unprecedented stability at a capacity never before seen between two continents. »Read more.

'Comet' Connects via ESnet, Internet2 100 Gbps Links

San Diego Supercomputer Center's "Comet" high-performance computer recently began early operations. The HPC system will feature new 100 Gbps connectivity to ESnet and Internet2. The high-speed connection will allow researchers to rapidly move data to SDSC for analysis and data sharing, and to return data to their institutions for local use.  »Read more.

DESI and NERSC: Mapping the Universe in 3D

In a recent interview, Berkeley Lab physicists Michael Levi and David Schlegel talked about building the largest ever 3D map of the universe using DESI (Dark Energy Spectroscopic Instrument) and supercomputers at NERSC.  »Read more.

This Week's CS Seminars

»CS Seminars Calendar

Simons Institute Open Lecture: Modern Coding Theory — Many Ideas, One Goal

Monday, May 4, *4:00pm-5:00pm, Sutardja Dai Hall, Banatao Auditorium, UC Berkeley
Rüdiger Urbanke, École Polytechnique Fédérale de Lausanne

Error correcting codes are ubiquitous. Every time we make a call, connect to WiFi, download a movie, or store a file, they help us get things right.

Over the years, the way we construct these codes has changed significantly. Initially, algebra brought structure to a previously intractable problem. Then lattices helped convey continuous-valued signals. Over the past twenty years, deterministic codes made way for random sparse graphs with low-complexity message-passing decoding. More recently, polar codes used the chain rule of mutual information to achieve capacity. And the latest contenders are spatially coupled codes, which exploit the physical mechanism that makes crystals grow to achieve the capacity of a large family of communication channels simultaneously.

I will describe how ideas from such diverse areas as abstract algebra, number theory, probability, information theory, and physics slowly made it from the blackboard into products, and outline the main challenges that we face today.

*Light refreshments will be served before the lecture at 3:30pm.
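
For readers who want a concrete picture of what an error-correcting code does, the sketch below implements a classic (7,4) Hamming code. It is a toy chosen for brevity, not anything from the lecture: it encodes four data bits into seven and corrects any single flipped bit via a syndrome lookup.

    # Toy (7,4) Hamming code: encodes 4 data bits into 7 and corrects any single bit flip.
    # Illustration only; the codes in the talk (LDPC, polar, spatially coupled) are far
    # larger and are decoded by message passing rather than a syndrome table.
    import numpy as np

    P = np.array([[1, 1, 0],
                  [1, 0, 1],
                  [0, 1, 1],
                  [1, 1, 1]])
    G = np.hstack([np.eye(4, dtype=int), P])      # systematic generator matrix
    H = np.hstack([P.T, np.eye(3, dtype=int)])    # parity-check matrix; G @ H.T = 0 (mod 2)

    def encode(data_bits):
        return (np.array(data_bits) @ G) % 2

    def decode(received):
        syndrome = (H @ received) % 2
        if syndrome.any():                        # nonzero syndrome: locate the flipped bit
            error_pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
            received = received.copy()
            received[error_pos] ^= 1
        return received[:4]                       # systematic code: data bits come first

    if __name__ == "__main__":
        word = encode([1, 0, 1, 1])
        noisy = word.copy()
        noisy[5] ^= 1                             # the channel flips one bit
        print("sent   :", [1, 0, 1, 1])
        print("decoded:", decode(noisy).tolist()) # recovers the original data bits

The modern codes Urbanke will discuss operate on blocks of thousands of bits and are decoded iteratively, but the goal is the same: recover the data despite channel errors.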

Matrix Computations and Scientific Computing Seminar: Matrix Factorization for Movie Recommendations

Wednesday, May 6, 11:00am-12:00pm, 380 Soda Hall, UC Berkeley
Harald Steck, Netflix

The Netflix recommender system for movies and TV shows comprises an ensemble of models. The talk will focus on matrix factorization models. Users' feedback data (e.g., played or rated titles) can be represented in a matrix of users by movies/TV shows. Such a matrix has several interesting properties: (1) it is sparse (each user rates only a small number of titles), (2) it is tall and thin (there are many more users than titles), and (3) it carries various selection biases. The last point means that there is information in which entries are present in the sparse matrix, beyond the information in the entries' values. One example of selection bias is that users tend to rate titles they like or know, so low rating values are under-represented in the data. Another is that users tend to rate movies with similar release years together. I will discuss different matrix factorization models tailored to these properties of the data. The models are optimized by stochastic gradient descent toward a personalized ranking of the movies for each user, rather than toward predicting missing entries in the matrix.

Sponsored by the Department of Mathematics at UC Berkeley.
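
As a rough illustration of the kind of model the abstract describes, here is a minimal matrix factorization sketch fit by stochastic gradient descent on a handful of made-up ratings. It is not the Netflix model: it minimizes squared error on observed entries rather than a personalized ranking objective, ignores the selection biases discussed above, and uses arbitrary hyperparameters.

    # Minimal matrix factorization of a sparse ratings matrix via stochastic gradient descent.
    # Illustration only; the production models in the talk optimize a ranking objective and
    # account for selection bias, which this sketch does not.
    import numpy as np

    def factorize(ratings, n_users, n_items, rank=2, lr=0.02, reg=0.05, epochs=200, seed=0):
        """ratings: list of (user, item, value) triples for the observed entries only."""
        rng = np.random.default_rng(seed)
        U = 0.1 * rng.standard_normal((n_users, rank))   # user factors
        V = 0.1 * rng.standard_normal((n_items, rank))   # item factors
        for _ in range(epochs):
            rng.shuffle(ratings)
            for u, i, r in ratings:
                pu = U[u].copy()
                err = r - pu @ V[i]                       # error on one observed entry
                U[u] += lr * (err * V[i] - reg * pu)      # gradient steps with L2 regularization
                V[i] += lr * (err * pu - reg * V[i])
        return U, V

    if __name__ == "__main__":
        # (user, item, rating) triples; a tiny, made-up data set
        data = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1), (2, 1, 2), (2, 2, 5)]
        U, V = factorize(data, n_users=3, n_items=3)
        print("predicted rating for user 0, item 2:", round(float(U[0] @ V[2]), 2))

A ranking-oriented variant of the sort the abstract mentions would replace the squared-error term with comparisons between pairs of titles for the same user, but the factor-update machinery looks much the same.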

Digital Neuromorphic Systems

Thursday, May 7, 11:00am-12:00pm, 540 Cory Hall, UC Berkeley
Rajit Manohar, Cornell University

The field of neuromorphic VLSI aims to replicate the functionality of biological systems using conventional electronic circuits. In collaboration with IBM Research, we recently developed TrueNorth, a low-power, single-chip, million-neuron neuromorphic system implemented with digital self-timed circuits. We present the six-year journey: what we started out to build, what we learned along the way, and the final design embodied in TrueNorth.

Sponsored by the Electrical Engineering and Computer Sciences (EECS) Department at UC Berkeley.
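
For context on what a digital neuromorphic chip computes, here is a generic discrete-time leaky integrate-and-fire neuron in a few lines of Python. It illustrates the class of spiking neuron models such chips implement; it is not TrueNorth's actual neuron model, and the weight, leak and threshold values are arbitrary.

    # Generic discrete-time leaky integrate-and-fire neuron (illustration only; not the
    # actual TrueNorth neuron, which is a more configurable digital variant).
    def lif_neuron(input_spikes, weight=0.3, leak=0.05, threshold=1.0):
        """Return the output spike train produced by a stream of 0/1 input spikes."""
        v = 0.0                     # membrane potential
        out = []
        for s in input_spikes:
            v += weight * s         # integrate weighted input
            v = max(v - leak, 0.0)  # leak toward rest
            if v >= threshold:      # fire and reset when the threshold is crossed
                out.append(1)
                v = 0.0
            else:
                out.append(0)
        return out

    if __name__ == "__main__":
        spikes_in = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
        print("in :", spikes_in)
        print("out:", lif_neuron(spikes_in))

TrueNorth wires a million such (more configurable) hardware neurons together in self-timed digital logic.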