
InTheLoop 06.20.2016

June 20, 2016

NERSC Readies for Cori Phase 2's July Delivery

For the past year, NERSC staff have been preparing users for the arrival of Cori Phase 2, the second phase of its newest supercomputer. Cori will include more than 9,300 nodes containing Intel’s Xeon Phi Knights Landing processor, which was officially unveiled today, June 20, at the International Supercomputing Conference in Germany. The first compute cabinets are scheduled to arrive in July.

When fully installed, Cori will be the largest system for open science based on Knights Landing processors. These Knights Landing nodes will make up the second phase of the Cori system.

To ensure that a significant number of its 6,000 users could make the most effective use of this new manycore architecture, NERSC staff selected 20 leading applications for the NERSC Exascale Science Applications Program (NESAP), a collaborative effort that partners NERSC, Intel and Cray experts with code teams across the U.S. Lessons learned from working with the 20 NESAP codes are being used to develop an optimization strategy that the rest of the user base can quickly adopt.

“Application readiness efforts are critical for enabling ground-breaking science on our HPC systems as we move toward exascale. For the past year we’ve been working with these 20 teams to optimize their codes for Cori, so that when the machine arrives, they are ready to take advantage of the many capabilities the new hardware offers,” said Jack Deslippe, acting head of NERSC’s Application Performance Group.

Post-doc Friesen Honored for Outstanding Dissertation

NERSC post-doctoral researcher Brian Friesen has been awarded the University of Oklahoma (OU) Provost's Dissertation Prize in Science and Engineering. His is the first physics and astronomy dissertation to win the university's prize in at least twenty years. 

Friesen modeled spectroscopic properties of type Ia supernovae months to years after explosion. He and collaborators compared the data from these models with a variety of spectroscopic observations of supernovae. Their work led to several new insights into spectrum formation in type Ia supernovae at late times, which had until then received little attention in the research community.

As a NERSC Exascale Science Applications Program (NESAP) post-doc, he works with Ann Almgren and others in the Center for Computational Sciences and Engineering (CCSE) in the Computational Research Division researching parallel computation in BoxLib, a software framework for massively parallel block-structured adaptive mesh refinement (AMR) codes.

ESnet Internship Leads to Book Chapter on Energy Efficiency for Datacenters, Networks

Just as research and education networks have expanded the capabilities of scientists, a 2011 ESnet internship inspired Baris Aksanli to broaden the scope of his research and led to his coauthoring a book chapter.

Aksanli, who was investigating how to shrink the carbon footprint of datacenters and networks with renewable energy resources, began looking at how those same resources could also improve energy efficiency while he was at ESnet. His work culminated in a chapter in a new book, Computational Sustainability, edited by Joerg Laessig, Kristian Kersting and Katharina Morik and published on May 30 by Springer. Aksanli collaborated on the chapter, “Renewable Energy Prediction for Improved Utilization and Efficiency in Datacenters and Backbone Networks,” with Jagannathan Venkatesh and Tajana Simunic Rosing of UC San Diego and ESnet’s Inder Monga.

During his ESnet internship, Aksanli was a Ph.D. student in computer science and engineering at the University of California San Diego. Since submitting the chapter a couple of years ago, Aksanli has graduated and will join San Diego State University as an assistant professor in August 2016. His research interests include energy-efficient cyber-physical systems, human behavior modeling in the "internet of things" and big data for energy-efficient embedded systems.

Vote for NERSC's Student Cluster Competition Team!

Cast your vote for NERSC's Student Cluster Competition Team as "Fan Favorite" at ISC16. NERSC is fielding its first-ever student team this year. Better yet, it's an all-female crew! Voting closes at 8 a.m. Wednesday.

This Week's CS Seminars

»CS Seminars Calendar

Monday, June 20

Toward Intelligent Restarted Linear Algebra Methods for Extreme Computing
11 a.m. – 12 p.m., Wang Hall - Bldg. 59, Room 3101
Serge G. Petiton, Maison de la Simulation/CNRS, and University Lille 1, Sciences et Technologies, France

Exascale supercomputers are expected to have highly hierarchical architectures, with nodes composed of many-core processors and accelerators. The different programming levels (from loosely connected clusters of processors to tightly coupled many-core processors and/or accelerators) will raise difficult new algorithmic issues. New methods should be defined and evaluated with respect to the modern state of the art in applied mathematics and scientific methods.

Krylov linear methods such as GMRES and ERAM are now used heavily and successfully in various domains and industries despite their complexity. Their convergence and speed depend greatly on the hardware used and on the choice of the Krylov subspace size and other parameters, which are difficult to determine efficiently in advance. Moreover, hybrid Krylov methods would allow communication to be reduced across all the cores, limiting reductions to subsets of those cores. Together with their numerical behavior and fault-tolerance properties, this makes these methods interesting candidates for exascale/extreme matrix computing. Communication-avoiding strategies may also be developed for each instance of these methods, generating more complex methods but with potentially high efficiencies. These methods have many correlated parameters, which may be optimized using auto-tuning or smart-tuning strategies to accelerate convergence and to minimize storage space, data movement and energy consumption.
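To make the role of the restart parameter concrete, here is a minimal sketch, not taken from the talk, that runs SciPy's restarted GMRES on an invented diagonally dominant test matrix and reports how the choice of Krylov subspace size changes the iteration count; the matrix, sizes and settings are assumptions chosen purely for illustration.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import gmres

    n = 2000
    # Nonsymmetric, diagonally dominant tridiagonal test operator (illustrative only).
    A = diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    for restart in (10, 30, 90):        # candidate Krylov subspace sizes
        res_history = []                # relative residual after each inner iteration
        x, info = gmres(A, b, restart=restart, maxiter=200,
                        callback=res_history.append, callback_type="pr_norm")
        print(f"restart={restart:3d}  inner iterations={len(res_history):4d}  "
              f"converged={info == 0}")

On an easy matrix like this one every setting converges and only the counts differ, but on harder problems the subspace size trades memory and communication per restart cycle against the number of cycles, which is exactly the kind of correlated parameter the abstract proposes to auto-tune.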

Thursday, June 23

Data Intensive Computing Ecosystems
10:30 – 11:30 a.m., Bldg. 50B, Room 4205
Amy Apon, Complex Systems, Analytics, and Visualization Institute and Clemson University

The challenges of distributed and parallel data processing systems include heterogeneous network communication, a mix of storage, memory, and computing devices, and common failures of communication and devices. Today's complex computer systems are difficult, if not impossible, to model analytically. Experimentation using production-quality hardware and software and realistic data is required to optimize these systems and to understand the system tradeoffs needed for robust performance. At the same time, experimental evaluation has its own challenges, including access to hardware resources at scale, robust workload and data characterization, configuration management of software and systems, and sometimes insidious optimization issues around the mix of software stacks or hardware/software resource allocation. This talk highlights experimental computer science research from the Data Intensive Computing Ecosystems laboratory at Clemson University. Highlighted projects include the design and implementation of the Dynamic AWS HPC project, infrastructures and workload characterization for connected mobility and Intelligent Transportation Systems, performance considerations of network functions virtualization using containers, and recent work in scalable approaches to topic modeling for streaming data systems.
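As a loose illustration of the streaming topic-modeling theme mentioned at the end of the abstract, the sketch below (not code from the Clemson laboratory) feeds invented mini-batches of text to scikit-learn's online latent Dirichlet allocation through partial_fit; the vocabulary, documents and topic count are assumptions for demonstration only.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Fixed vocabulary so successive mini-batches share one feature space.
    vocab = ["network", "latency", "container", "traffic", "vehicle",
             "sensor", "topic", "model", "cluster", "workload"]
    vectorizer = CountVectorizer(vocabulary=vocab)

    lda = LatentDirichletAllocation(n_components=3, learning_method="online",
                                    random_state=0)

    # Each inner list stands in for one mini-batch arriving from a stream.
    batches = [
        ["network latency traffic", "container cluster workload"],
        ["vehicle sensor traffic", "topic model cluster"],
    ]
    for docs in batches:
        X = vectorizer.transform(docs)   # bag-of-words counts for this batch
        lda.partial_fit(X)               # incremental (online) update of the topics

    # Top words per topic after the stream has been consumed.
    for k, weights in enumerate(lda.components_):
        top = [vocab[i] for i in weights.argsort()[::-1][:3]]
        print(f"topic {k}: {top}")

Because each mini-batch is folded in incrementally, the model never needs the full corpus in memory, which is the property that matters when documents arrive as a stream.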