
InTheLoop | 09.16.2013

NERSC Director Sudip Dosanjh Delivers Keynote at ParCo 2013

NERSC Director Sudip Dosanjh discussed “On the Confluence of Exascale and Big Data” in a keynote talk delivered Friday, Sept. 13, at ParCo 2013, the International Conference on Parallel Computing. The four-day conference, first held in 1983, took place at the Technical University of Munich, home of the Leibniz Supercomputing Centre.

In his talk, Dosanjh noted that exascale computing and big data are sometimes viewed as orthogonal to one another, but argued that the two face many similar technical challenges, including tightening power and energy constraints, the growing mismatch between computing and data-movement speeds, an explosion in concurrency, and the reduced reliability of large computing systems. Even though exascale and data-intensive systems might have different system-level architectures, their fundamental building blocks will be similar. Analyzing all the information produced by exascale simulations will itself generate a big data problem. And finally, many experimental facilities are being inundated with large quantities of data as sensors and sequencers improve at rates that surpass Moore's Law.


Article Outlines NERSC’s Role in Developing Better Batteries for Electric Cars

Digital Manufacturing, an online newsletter published by the HPCwire group, posted a story on Sept. 12 describing NERSC’s role in helping scientists develop better batteries for electric vehicles. The featured article by Managing Editor Chelsea Lang looks at NERSC’s role in the Materials Project, research into alternative anodes that are eight times more efficient than lithium anodes at storing energy, and the use of graphene to create better capacitors.


David Brown Answers Five Questions on Applied Math and Supercomputing

As part of DOE's communications focus on supercomputing in September, CRD Director and applied mathematician David Brown answers five questions on how math makes supercomputers even more powerful tools for scientific discovery. Read the Q&A.


Distributed Merge Trees Making News

Government Computer News, a magazine covering, well, computer news related to the government, wrote an article about the distributed merge trees work done by Dmitry Morozov and Gunther Weber of CRD’s Visualization Group. “A team at the Energy Department lab has developed new techniques for analyzing huge data sets, by using an approach called ‘distributed merge trees’ that takes better advantage of supercomputing’s massively parallel architectures,” writes Kevin McCaney in his Sept. 9 article.

The news was also picked up by ACM Technews and Scientific Computing, and is slated to appear Sept. 25 in International Science Grid This Week. The coverage stemmed largely from an article written by Linda Vu of the Computing Sciences communications team.


This Week’s Computing Sciences Seminar

A Sparse Inertia-Revealing Factorization

Monday, September 16, 10 – 11 a.m., Bldg. 50F, Room 1647

Alex Druinsky, Tel Aviv University, Israel

Abstract: I will present an algorithm for computing the inertia of large sparse symmetric matrices that offers guaranteed bounds on the fill produced during the computation. It is the only sparse inertia algorithm that offers such guarantees. The computation proceeds by applying a sequence of transformations that reduce the matrix to upper-triangular form, producing as a byproduct the sequence of determinants of the leading principal submatrices. From this sequence the inertia is easily computed using a formula derived from Sturm's theorem. The fill caused by the algorithm is bounded by the fill of the sparse QR factorization (but is often much smaller), and therefore the ordering algorithms that are used to preserve the sparsity in the QR and LU factorizations are effective in this case as well. 
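The Sturm-theorem idea the abstract alludes to can be illustrated in a few lines: for a symmetric matrix whose leading principal minors are all nonzero, the number of negative eigenvalues equals the number of sign changes in the sequence 1, d_1, ..., d_n of those minors. The sketch below is a toy dense-matrix illustration of that classical formula (using NumPy determinants), not the speaker's sparse algorithm:

```python
import numpy as np

def inertia_from_minors(A):
    """Recover the inertia (n_pos, n_neg) of a symmetric matrix A from the
    signs of its leading principal minors, assuming all minors are nonzero.
    Sign changes in the sequence 1, d_1, ..., d_n count negative eigenvalues."""
    n = A.shape[0]
    minors = [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]
    seq = [1.0] + minors
    n_neg = sum(1 for a, b in zip(seq, seq[1:]) if a * b < 0)
    return n - n_neg, n_neg  # nonsingular case: remaining eigenvalues are positive

# Check against a direct eigenvalue computation on a random symmetric matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2
pos, neg = inertia_from_minors(A)
eigs = np.linalg.eigvalsh(A)
assert pos == int(np.sum(eigs > 0)) and neg == int(np.sum(eigs < 0))
```

Dense determinants are far too expensive in practice, of course; the point of the algorithm described in the talk is to obtain this sequence of minors as a cheap byproduct of a sparse, fill-bounded reduction to upper-triangular form.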

The algorithm works well in practice. I will present experimental evidence showing that it is usually stable, although we have been able to construct synthetic matrices on which the computation fails. We have experimentally compared the performance of the algorithm to that of both the sparse QR code SuiteSparseQR and the symmetric indefinite factorization code MA57. The talk will describe these results and their implications.