
InTheLoop | 03.23.2015


ESnet to Demonstrate Science DMZ as a Service, Create Virtual Superfacility at Conference

At the twenty-second GENI Engineering Conference, being held March 23-26 in Washington, D.C., ESnet Chief Technologist Inder Monga will lead a demonstration of Science DMZ as a Service, showing how the technique for speeding the flow of large datasets can be created on demand. The demonstration, which leverages DOE and NSF research areas, will be presented during the conference plenary session on Wednesday morning. The conference is tailor-made for the demonstration, as GENI, the Global Environment for Network Innovations, provides a virtual laboratory for networking and distributed systems research and education. »Read more.

Ushizima Helps Girls Bridge the STEM Gender Gap

With her responsibilities as a staff scientist and deputy lead for the Data Analytics and Visualization Group at Lawrence Berkeley National Laboratory, and as a data science fellow at the Berkeley Institute for Data Science at UC Berkeley, one might expect Daniela Ushizima to look forward to time away from computers and coding. But about once a month, she spends a full day with young women at the Oakland chapter of Black Girls Code (BGC), teaching them about computers, coding, robotics and careers in STEM.

"It is not unusual these days to be concerned with the education of underserved girls in very distant parts of the world," says Ushizima. "But let’s make sure we do not forget the girls right under our noses." She continues, "BGC gives me a soothing hope that this generation of women can bridge the gender inequality gap currently ruling STEM-based workforce in several workplaces." »Read more.

Peisert Compiles ASCR Workshop Report on Securing Scientific Computing Integrity

Sean Peisert of CRD’s Integrated Data Frameworks Group has published a report from a DOE workshop on ASCR Cybersecurity for Scientific Computing Integrity. Peisert co-chaired the workshop, held Jan. 7-9 in Rockville, Md. CRD Director David Brown and ESnet’s Brian Tierney were members of the workshop organizing committee. Eric Roman of CRD and Scott Campbell of NERSC participated in the workshop. »Read more.

»Download the report (PDF | 2.5MB).

Lab to Host March 24–25 Meeting of ESnet Site Coordinators Committee

ESnet’s Site Coordinators Committee, or ESCC, will hold its winter meeting at Berkeley Lab on March 24-25. The ESCC is composed of representatives from the ESnet-connected sites. These site coordinators make or approve requests to ESnet for any changes to the site’s network. Two to three times a year, the site coordinators meet as a committee to get updates on ESnet plans and activities. Site coordinators also give short presentations on ESnet-related work at their home institutions.

Apply by April 3 for Argonne Training Program on Extreme-Scale Computing, ATPESC 2015

Doctoral students, postdocs, and early-career computational scientists and engineers are encouraged to apply to the Argonne Training Program on Extreme-Scale Computing (ATPESC). Held August 2-14, 2015, this intensive two-week course is designed to train the next generation of supercomputer users. The deadline to apply is April 3. The program is free, and domestic airfare, meals and lodging are provided for the 60 accepted participants. »Learn more and apply.

InsideHPC Features Antypas Video

InsideHPC recently featured a video of NERSC Deputy for Data Science Katie Antypas presenting at the 2015 OFS Developer’s Workshop. Her talk was entitled "Preparing the Broad DOE Office of Science User Community for Advanced Manycore Architectures – and some implications for the interconnect." As part of the discussion, Antypas describes how NERSC is readying code for the Cori supercomputer based on Intel Knights Landing. »Read more.

This Week's CS Seminars

»CS Seminars Calendar

Towards better symmetric indefinite factorizations (in shared memory)

Monday, March 23, 2 - 3pm; Bldg. 50F, Room 1647
Jonathan Hogg, Science and Technology Facilities Council (STFC), United Kingdom

Fast and stable symmetric indefinite (A = LDL^T) factorizations are key to many areas of mathematics. Major use cases include 2D finite element modelling and optimization based on interior-point methods. Current solvers must choose between being fast and being stable. We describe our recent research into new methods that achieve both at once. This includes alternatives to the inherently serial Hungarian matching-based preprocessing techniques, communication-avoiding pivoting techniques, and innovations that allow speculative execution of tasks.
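
As a concrete illustration of the factorization at the heart of this talk (a minimal sketch, not Hogg's solver), the following snippet computes an A = LDL^T factorization of a small symmetric indefinite matrix with SciPy's ldl routine; the matrix itself is an arbitrary example chosen for illustration.

```python
import numpy as np
from scipy.linalg import ldl

# A small symmetric indefinite matrix: it has eigenvalues of both signs,
# so a Cholesky factorization would fail, but an LDL^T factorization
# with 1x1 and 2x2 pivots in D succeeds.
A = np.array([[ 1.0,  2.0,  0.0],
              [ 2.0, -1.0,  3.0],
              [ 0.0,  3.0,  2.0]])

# scipy.linalg.ldl returns the (possibly permuted) outer factor L,
# the block-diagonal D, and the permutation used for pivoting.
L, D, perm = ldl(A)

assert np.allclose(L @ D @ L.T, A)   # the factorization reconstructs A
print(D)                             # note any 2x2 blocks on the diagonal
```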

Improving Multifrontal Solvers by Means of Block Low-Rank Approximations

Tuesday, March 24, 11am - 12pm; Bldg. 50B, Room 4205
Theo Mary, Institut National Polytechnique de Toulouse

Matrices coming from elliptic partial differential equations (PDEs), and in particular Poisson equations, have been shown to have a low-rank property: well-defined off-diagonal blocks of their Schur complements can be approximated by low-rank products. Given a suitable ordering of the matrix that gives the blocks a geometrical meaning, such approximations can be computed using an SVD or a rank-revealing QR factorization. The resulting representation offers a substantial reduction in memory requirements and gives efficient ways to perform many of the basic dense linear algebra operations. Several strategies, mostly based on hierarchical formats, have been proposed to exploit this property. We study a simple, non-hierarchical, low-rank format called Block Low-Rank (BLR), and explain how it can be used to reduce the memory footprint and the complexity of sparse direct solvers based on the multifrontal method. We present experimental results showing that even though BLR-based factorizations are asymptotically less efficient than hierarchical approaches, they still deliver considerable gains. The BLR format is compatible with numerical pivoting, and its simplicity and flexibility make it easy to use in the context of a general-purpose, algebraic solver. In this talk, we show how the BLR format was incorporated into the general-purpose solver MUMPS and present experiments showing the gains obtained with respect to standard (full-rank) MUMPS factorizations.
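
The compression step the abstract describes, approximating a block by a low-rank product via a truncated SVD, can be sketched in a few lines; the low_rank_approx helper, the tolerance, and the smooth kernel block below are illustrative assumptions, not MUMPS internals.

```python
import numpy as np

def low_rank_approx(B, tol=1e-10):
    """Compress a dense block B into a product X @ Y, keeping only
    singular values above tol relative to the largest one."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))   # numerical rank at tolerance tol
    X = U[:, :k] * s[:k]                      # m x k factor
    Y = Vt[:k, :]                             # k x n factor
    return X, Y, k

# Off-diagonal blocks of Schur complements from elliptic PDEs have rapidly
# decaying singular values; mimic that here with a smooth kernel block.
x = np.linspace(0.0, 1.0, 200)
B = 1.0 / (2.0 + x[:, None] - x[None, :])    # entries 1/(2 + x_i - x_j)

X, Y, k = low_rank_approx(B)
print(f"200x200 block compressed to rank {k}: "
      f"{B.size} -> {X.size + Y.size} stored entries")
print("max abs error:", np.abs(B - X @ Y).max())
```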

Dynamic Kernel Scheduler (DKS) - a thin software layer between host application and hardware accelerators

Tuesday, March 24, 1 - 2pm; Bldg. 50F, Room 1647
Uldis Locans, ETH Zurich

Emerging processor architectures such as GPUs and Intel MICs provide a huge performance potential for high performance computing. However, developing software for these hardware accelerators introduces additional challenges for the developer, such as exposing additional parallelism, dealing with different hardware designs (Nvidia GPUs, AMD GPUs and APUs, Intel MICs), and using multiple development frameworks in order to target devices from different vendors (CUDA, OpenCL, OpenACC, OpenMP).

Dynamic Kernel Scheduler (DKS) was developed to provide a software layer between the host application and different hardware accelerators. DKS handles the communication between the host and the accelerators, schedules task execution, and provides a library of built-in algorithms. The algorithms in the DKS library are written in CUDA, OpenCL and OpenMP; depending on the available hardware, DKS selects the appropriate implementation of each algorithm.
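
The dispatch idea behind such a layer, registering the same kernel under several backends and picking one at runtime from the hardware actually present, might look roughly like the sketch below; the KernelScheduler class, backend names, and saxpy kernel are hypothetical stand-ins, not the actual DKS API.

```python
# Rough sketch of runtime backend dispatch in the spirit of DKS.
from typing import Callable, Dict, List

class KernelScheduler:
    def __init__(self) -> None:
        self._impls: Dict[str, Dict[str, Callable]] = {}

    def register(self, kernel: str, backend: str, fn: Callable) -> None:
        self._impls.setdefault(kernel, {})[backend] = fn

    def dispatch(self, kernel: str, available: List[str], *args):
        # Prefer accelerator backends over the host fallback, in this order.
        for backend in ("cuda", "opencl", "openmp"):
            if backend in available and backend in self._impls.get(kernel, {}):
                return self._impls[kernel][backend](*args)
        raise RuntimeError(f"no usable backend for kernel {kernel!r}")

sched = KernelScheduler()
sched.register("saxpy", "openmp",
               lambda a, x, y: [a * xi + yi for xi, yi in zip(x, y)])
# sched.register("saxpy", "cuda", cuda_saxpy)  # preferred if a GPU is found

print(sched.dispatch("saxpy", ["openmp"], 2.0, [1.0, 2.0], [3.0, 4.0]))
# -> [5.0, 8.0]
```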

Boxfish: Mapping Performance Data and Visualizations + Visualizing application semantics in performance tools

Thursday, March 26, 12 - 1pm; Bldg. 50F, Room 1647
Katherine Isaacs, University of California, Davis
Todd Gamblin, Lawrence Livermore National Laboratory

We collect vast amounts of data about the execution of our applications in order to optimize them, thereby making efficient use of our computing resources. However, analyzing this data is difficult, in part because the context in which it is informative differs from the context in which it is collected. Boxfish is a visualization platform that manages the relationships among contexts, allowing analysts to explore their data in terms of the multiple factors affecting overall performance. In this talk, I will go over the key features of Boxfish and discuss a few ways it has been used to understand performance data.

In addition to Boxfish, LLNL is developing a number of other contextual performance visualization tools. I will discuss Ravel and MemAxes: tools for visualizing distributed-memory MPI traces and on-node memory performance data, respectively. As with Boxfish, these tools project performance data onto the application domain. However, both tools take special steps to generate application views. Ravel performs analysis on MPI event traces to identify logical time steps and to restore structure in views of communication patterns. MemAxes implements specialized measurement techniques to associate memory references with mesh elements at runtime. I will discuss these techniques, as well as ongoing work at LLNL to simplify the task of incorporating application semantics in performance analysis tools.
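
To give a flavor of what assigning logical time steps to an MPI trace involves, here is a toy sketch using the classic Lamport-clock rule (a receive is ordered after its matching send); the event tuples are hypothetical, and Ravel's actual structure-extraction algorithm is considerably more sophisticated than this.

```python
# Toy assignment of logical time steps to trace events via Lamport clocks.
from collections import defaultdict

# (process, event_id, kind, partner_event_id); partner links a recv to its send.
trace = [
    (0, "s0", "send", None),
    (1, "r0", "recv", "s0"),
    (1, "s1", "send", None),
    (0, "r1", "recv", "s1"),
]

clock = defaultdict(int)   # per-process logical clock
step = {}                  # event_id -> assigned logical step

for proc, eid, kind, partner in trace:   # assume trace is in causal order
    t = clock[proc] + 1
    if kind == "recv":
        t = max(t, step[partner] + 1)    # a receive happens after its send
    clock[proc] = t
    step[eid] = t

print(step)   # {'s0': 1, 'r0': 2, 's1': 3, 'r1': 4}
```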