
InTheLoop 07.11.2016


Inder Monga Named Director of ESnet, Berkeley Lab's Scientific Networking Division

Indermohan “Inder” Singh Monga, an internationally recognized expert in advanced networking research, is the new executive director of ESnet. He will also assume the role of director of the Scientific Networking Division at Lawrence Berkeley National Laboratory, which manages ESnet.
Monga, who joined ESnet in September 2009, is only the fifth person to lead ESnet since it was created 30 years ago. When Greg Bell announced in February 2016 that he was stepping down as ESnet director, Monga was named interim director. Since joining the organization, Monga has served as a software engineer, chief technology officer, group lead of the Tools Team and deputy of technology for the Scientific Networking Division. He provides research and technology direction, actively leads research projects and has championed building a focused software engineering effort within ESnet. He is also a frequent invited and keynote speaker at industry and research and education (R&E) networking conferences.

Livermore, Berkeley Lead Power Grid Cybersecurity Project

Cybersecurity experts Jamie Van Randwyk of Lawrence Livermore National Laboratory and Sean Peisert of Berkeley Lab are leading a new program to develop data analysis methods to better protect the nation's power grid.
The project, “Threat Detection and Response with Data Analytics,” is part of a $220 million, three-year Grid Modernization Initiative launched in January 2016 by the Department of Energy to support research and development in power grid modernization.
The goal of this project is to develop technologies and methodologies to protect the grid from advanced cyber threats by collecting data from a range of sources and then using advanced analytics to identify threats and determine how best to respond to them.

Kathy Yelick Delivers Keynote at 23rd ARITH Symposium

Kathy Yelick, Associate Lab Director for Computing Sciences, delivered the keynote address today at ARITH 2016, the 23rd IEEE Symposium on Computer Arithmetic. Yelick spoke on “Antisocial Parallelism: Avoiding, Hiding and Managing Communication” at the three-day symposium in Santa Clara, Calif.

Since 1969, the ARITH symposia have served as the primary conference for presenting the latest research in computer arithmetic. Topics include theoretical aspects, number systems, algorithms for operations and math functions, implementations, validation, and applications of computer arithmetic.
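For a flavor of the issues studied in this field (a sketch for illustration only, not taken from the talk or the symposium program): binary floating-point cannot represent most decimal fractions exactly, and naive summation accumulates rounding error that compensated (Kahan) summation largely avoids.

```python
# Illustration only: two classic computer-arithmetic effects.
# 0.1 has no exact binary floating-point representation, so rounding appears.
print(0.1 + 0.2 == 0.3)  # False: the sum is 0.30000000000000004

# Kahan (compensated) summation tracks the low-order bits lost at each step.
def kahan_sum(values):
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x in values:
        y = x - c          # apply the correction from the previous step
        t = total + y      # big + small: low-order bits of y may be lost
        c = (t - total) - y  # recover what was lost, to subtract next time
        total = t
    return total

data = [0.1] * 10
print(sum(data))        # plain sum drifts: 0.9999999999999999
print(kahan_sum(data))  # compensated sum recovers 1.0
```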

NERSC Shares First Intel Xeon Phi Performance Results

NERSC staff presented the first results of running applications on the Intel Xeon Phi Knights Landing (KNL) architecture at the recent ISC16 conference in Frankfurt, Germany. KNL will form the basis of NERSC's new Cori supercomputer, which will have 9,300 KNL nodes in a Cray XC40 system. Cori KNL hardware will arrive at NERSC this summer.

Next Webinar in 'Best Practices for HPC Software Developers' Series this Thursday

The fifth webinar in the "Best Practices for HPC Software Developers" series will take place 10-11 a.m. Thursday, July 14 (Pacific time). The webinar, entitled "How the HPC Environment is Different from the Desktop (and Why)," will be presented by Katherine Riley from the Argonne Leadership Computing Facility.

The series is a cooperative effort between NERSC, ALCF, OLCF and the IDEAS software productivity project. Participation in prior sessions is not required.

This Week's CS Seminars

»CS Seminars Calendar

Tuesday, July 12

HPX-RTE: A Lightweight Runtime Environment for Open MPI
10-11 a.m., Bldg. 59, Room 4102
Hadi Montakhabi, High Performance Computing, University of Houston, Houston, Texas

High performance computing systems are growing toward machines with hundreds of thousands to millions of nodes, harnessing the computing power of billions of cores. Running parallel applications efficiently on such large machines will require optimized runtime environments that are scalable and resilient. Multi- and many-core chip architectures in large-scale supercomputers pose several new challenges to designers of operating systems and runtime environments.

ParalleX is a general purpose parallel execution model aiming to overcome the limitations imposed by the current hardware and the way we write applications today. High Performance ParalleX (HPX) is an experimental runtime system for ParalleX.

The majority of scientific and commercial applications in HPC are written in MPI. To facilitate the transition from the MPI model to ParalleX, a compatibility mechanism between the two is needed; no such mechanism existed before. The goal of this project was to provide a compatibility mechanism that lets MPI applications use the HPX runtime system. This is achieved by developing a new runtime system for the Open MPI project, an open source implementation of MPI. We call this new runtime system HPX-RTE.

HPX-RTE is a new, lightweight, open source runtime system specifically designed for the emerging exascale computing environment. It relies on the HPX project's advanced features to allow for easy extension and transparent scalability. HPX-RTE provides full compatibility for current MPI applications to run on the HPX runtime system, offering a simple path for transitioning from MPI to HPX. It also paves the way for future hybrid programming models such as HPX-MPI and the integration of more HPX features into Open MPI.