
InTheLoop | 06.23.2014


Berkeley Lab Named Intel Parallel Computing Center

Lawrence Berkeley National Laboratory has been named an Intel Parallel Computing Center (IPCC), a collaboration with Intel aimed at adapting existing scientific applications to run on future supercomputers built with manycore processors. Such supercomputers will potentially have millions of processor cores, but today’s applications aren’t designed to take advantage of this architecture.

The Berkeley Lab IPCC will be led by Nick Wright of the National Energy Research Scientific Computing Center (NERSC), and Bert de Jong and Hans Johansen of the Computational Research Division (CRD). »Read more

Deadline: NERSC Exascale Science Application Program Applications Due Monday

The National Energy Research Scientific Computing Center (NERSC) is accepting applications for the NERSC Exascale Science Application Program (NESAP) through Monday, June 30.

Through NESAP, NERSC will partner with approximately 20 application teams to help prepare codes for the Cori architecture. A key feature of the Cori system is the Intel Knights Landing processor, which will have over 60 cores per node with multiple hardware threads on each core, and will also include high-bandwidth, on-package memory. The program will partner application teams with resources at NERSC, Cray, and Intel, and will last through the acceptance of the Cori system. »Read more.
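
Preparing a code for a manycore node like Knights Landing generally comes down to exposing enough thread-level parallelism and vectorization to keep dozens of cores and their hardware threads busy. The fragment below is a minimal, generic sketch of that idea, not drawn from any NESAP application; the array size and kernel are purely illustrative. It uses OpenMP to thread and vectorize a simple loop.

    /* Generic sketch of adding thread and SIMD parallelism for a manycore node.
     * Illustration only; not taken from any NESAP code. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    #define N 1000000   /* illustrative problem size */

    int main(void)
    {
        double *a = malloc(N * sizeof(double));
        double *b = malloc(N * sizeof(double));
        double sum = 0.0;

        /* Initialize in parallel so memory pages are first touched by the
         * threads that will later use them. */
        #pragma omp parallel for
        for (long i = 0; i < N; i++) {
            a[i] = 1.0;
            b[i] = 2.0;
        }

        /* Thread-parallel, vectorizable kernel with a reduction. */
        #pragma omp parallel for simd reduction(+:sum)
        for (long i = 0; i < N; i++) {
            a[i] = a[i] + 0.5 * b[i];
            sum += a[i];
        }

        printf("threads=%d sum=%f\n", omp_get_max_threads(), sum);
        free(a);
        free(b);
        return 0;
    }

The general point of such restructuring is that on a manycore node, memory placement and use of the vector units, not just raw core counts, tend to determine performance.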

Register for Virtual InterLab, Summer 2014

NREL is hosting a free webinar on interlab collaboration this Thursday, June 26, from 9am to 12pm PDT. Space is limited, so »reserve your webinar seat now. After registering you will receive a confirmation email containing information about joining the webinar. Talks include the following:

Enterprise Drupal at SLAC

Enterprise Drupal CMS: 2000 Websites, 5 pros, 1 platform

Presenter: John Doumani

Content Management Systems are generally good at helping end users manage the content of their site(s), but how does an organization create, manage, and support potentially thousands of websites? We will share SLAC's experience with the recent implementation of an enterprise Drupal 7 platform that allows us to centrally manage all external and internal web properties created using the platform, and we will walk through the challenges as well as the wins.
Topics include:
- Project Overview and Objectives
- Infrastructure and Architecture
- Site Provisioning Tool (Aegir v2.0)
- Site Types (Installation Profiles)
- Site Features

Web Collaboration at Sandia

Presenter: Hope Niblik

Collaborative tools are considered essential to success in large enterprises, but why that is true is not always well understood. Often, employees do not adopt the tools as expected, and the tools end up lightly used or entirely ignored.

Successful implementations have been studied, and what is becoming clear is that up-front planning and a shared understanding of why the tools exist are essential to success. Knowing the audience and purpose of the tools is a start. Making the individual benefits clear and getting a critical mass of users on board are also important. If you’re hoping to build a community, having an active community manager and a support system in place makes the difference and leads to thriving digital communities.

What is Sandia doing towards these goals? Let’s talk about where we’ve failed and succeeded, and where we think we’re headed. »Register now.

This Week's CS Seminars

»CS Seminars Calendar

Procmon: A scalable workload analysis system for the extreme data era

Monday, June 23, 12:30pm–1:30pm, 943-238 (NERSC OSF Conf. Rm. 238)

Douglas Jacobsen, Ph.D., NERSC Bioinformatics Computing Consultant, National Energy Research Scientific Computing Center | Joint Genome Institute

The composition, resource consumption, and bottlenecks of data-intensive calculations vary greatly from those of traditional high-performance computing workloads. In the particular case of bioinformatics calculations, a single analysis may start many different scientific analysis executables employing a variety of (or no) strategies for parallel computation, interleaved with file conversion and data transfer operations, all of which are manipulated and controlled by complex scripts.

We have developed the distributed procmon system to identify the components and characterize the performance, and performance variance, of all running processes on a Linux-based large-scale computational platform. The procmon system collects sampled process statistics and file access information for a job, associating the observed process data with relevant job-context information. The system is designed to interfere as little as possible with the running jobs: it uses less than 0.03% of a typical job's CPU time and has minimal memory and network bandwidth requirements, while still providing highly detailed job data. The use of an AMQP-based messaging system offers a great deal of flexibility for data collection and even enables live monitoring of all running jobs, or of targeted subsets of running jobs. The recent addition of an integrated analytic framework has enabled NERSC to begin gaining new insights into some of the exceptionally complex data-intensive workloads running on our bioinformatics cluster.
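
As a rough illustration of the sampling idea described above, the sketch below reads a few fields from a process's /proc/<pid>/stat entry on Linux at a fixed interval. It illustrates the general approach only, not the procmon code: in a system like procmon these samples would additionally carry job-context information and be forwarded over an AMQP messaging layer rather than printed.

    /* Hypothetical sketch of sampling per-process statistics from /proc on Linux.
     * Illustration of the general idea only; not the procmon implementation. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }
        char path[64];
        snprintf(path, sizeof(path), "/proc/%s/stat", argv[1]);

        for (int sample = 0; sample < 10; sample++) {   /* a few fixed samples */
            FILE *fp = fopen(path, "r");
            if (!fp) {
                perror("fopen");
                return 1;
            }
            long utime, stime, rss;
            /* Fields per proc(5): 14=utime, 15=stime, 24=rss; skip the rest.
             * (Assumes the comm field contains no spaces.) */
            int n = fscanf(fp,
                "%*d %*s %*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u "
                "%ld %ld %*d %*d %*d %*d %*d %*d %*u %*u %ld",
                &utime, &stime, &rss);
            fclose(fp);
            if (n == 3)
                printf("utime=%ld stime=%ld rss_pages=%ld\n", utime, stime, rss);
            sleep(1);   /* sampling interval; a real monitor would tune this */
        }
        return 0;
    }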

High-Performance Machine Learning

Wednesday, June 25, 11:00am – 12:00pm, Bldg. 50B, Room 4205

Alexander Gray, Skytree Inc. and Georgia Institute of Technology

How do you get to high-performance machine learning, achieving truly best-in-class results against the competitors in your application area? For example, how do large organizations with decades of experience in analytics, such as global finance institutions or international astronomy projects, achieve ultra-high detection rates when looking for ultra-high-value needles in vast haystacks? In this talk we will summarize the essential ingredients needed to create such state-of-the-art systems, including choices regarding cluster designs, data preparation, machine learning methods, and computational approaches. To illustrate the main points, examples of machine learning computations on massive data will be demonstrated throughout the talk.

Dr. Gray is Chief Technology Officer at Skytree Inc. and Adjunct Associate Professor in the College of Computing at Georgia Tech. His work has focused on algorithmic techniques for making machine learning tractable on massive datasets. He has been working with large-scale scientific data for over 20 years, starting at NASA's Jet Propulsion Laboratory in its Machine Learning Systems Group. He recently served on the National Academy of Sciences Committee on the Analysis of Massive Data, as a Kavli Scholar, and as a Berkeley Simons Fellow, and is a frequent adviser and speaker on the topic of machine learning on big data in academia, science, and industry.

Overview of the Exascale Challenges and ASCR's Exascale Program

Friday, June 27, 11:00am – 12:30pm, Bldg. 50A, Room 5132

John Shalf, Computer and Data Sciences Department, Lawrence Berkeley National Laboratory

This talk will kick off a multi-week lecture series describing ongoing research in preparation for the HPC systems of the coming decade. This is typically referred to as "Exascale," but the changes are affecting computing technology at ALL scales. The talk will provide an overview of the challenges posed by the physical limitations of the underlying silicon-based CMOS technology, introduce the next generation of emerging machine architectures, and describe the anticipated effect on the way we program machines in the future.

Background: For the past twenty-five years, a single model of parallel programming (largely bulk-synchronous MPI) has, for the most part, been sufficient to translate algorithms into reasonable parallel programs, even for more complex applications. In 2004, however, a confluence of events changed forever the architectural landscape that underpinned our current assumptions about what to optimize for when we design new algorithms and applications. We have been taught to prioritize and conserve things that were valuable 20 years ago, but the new technology trends have inverted the value of our former optimization targets. The time has come to examine the end result of our extrapolated design trends and use them as a guide to re-prioritize what resources to conserve in order to derive performance for future applications. This talk will describe the challenges of programming future computing systems. It will then provide some highlights from the search for durable programming abstractions that more closely track emerging computer technology trends, so that when we convert our codes over, they will last through the next decade.
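
"Bulk-synchronous MPI" here refers to the familiar pattern of alternating independent local computation with global communication or synchronization across all ranks. The fragment below is a generic, hypothetical sketch of that pattern, not code from the talk: each rank works on its own piece of the data, then all ranks meet in a collective reduction before the next step.

    /* Generic sketch of the bulk-synchronous MPI pattern: local compute
     * followed by a global collective each iteration. Illustration only. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local = (double)rank;          /* each rank's piece of the data */
        for (int step = 0; step < 5; step++) {
            local = local * 0.5 + 1.0;        /* local computation phase */

            double global = 0.0;              /* global synchronization phase */
            MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

            if (rank == 0)
                printf("step %d: global sum = %f\n", step, global);
        }

        MPI_Finalize();
        return 0;
    }

The collective at the end of every step is what makes the pattern "bulk synchronous": no rank proceeds until all ranks have contributed, which is one of the assumptions the talk argues future architectures will put under pressure.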