InTheLoop | 06.16.2014
CRD’s McParland to Retire after 41 Years of Advancing Scientific Data Collection
Ever since he built a radio telescope in his backyard as a kid in Chicago, Chuck McParland’s interest in science and technology has taken his work to the ends of the Earth and the depths of space. But after 41 years at UC Berkeley and then Berkeley Lab, McParland plans to officially retire at the end of June. Starting at the Bevatron, where data was input on punch cards and output was recorded on paper tape, McParland has been involved in the design of data collection and analysis systems for a satellite, the IceCube neutrino experiment buried deep beneath the ice of Antarctica, smart meters, and more. “The lab has always been an amazing place to work,” McParland said, looking back on his time here. “There’s never been a shortage of interesting and challenging projects.” »Read more
Simulations Show 'Erratic' Laser Light Okay for Next-Gen Particle Accelerators
Laser light needn’t be as precise as previously thought to drive a new breed of miniature particle accelerators, Berkeley Lab researchers have found by running simulations at NERSC. A new study shows that certain requirements for building plasma wakefield particle accelerators, an emerging type of compact particle accelerator, can be significantly relaxed. Guided by theory and using computer simulations to test various scenarios, the researchers looked at how laser beams of various colors and phases—basically a hodgepodge of laser light—affect plasma in a wakefield accelerator. They found little difference in how the plasma responded, no matter the light, suggesting that laser beams chained together in "stages" to create longer paths (and thus faster particles) needn't be as exacting as previously thought. Their research is presented as the cover story in the May special issue of Physics of Plasmas. »Read more
Reminder: NERSC Exascale Science Application Program Accepting Applications Through June 30
The National Energy Research Scientific Computing Center (NERSC) is now accepting applications for the NERSC Exascale Science Application Program (NESAP). »Applications are due June 30, 2014.
Through NESAP, NERSC will partner with approximately 20 application teams to help prepare codes for the Cori architecture. A key feature of the Cori system is the Intel Knights Landing processor, which will have more than 60 cores per node with multiple hardware threads on each core, as well as high-bandwidth, on-package memory. The program will partner application teams with resources at NERSC, Cray, and Intel, and will last through the acceptance of the Cori system. »Read more.
Popular NERSC Nobel Keynote Lectures Available Online
NERSC's 40th anniversary speaker series (featuring Nobel prize-winning work the center helped support) was a great success. The speakers, including three Nobel Laureates, drew standing-room-only crowds; hundreds watched the live stream, and many more participated through social media.
Francesca Verdier was a driving force behind the series, recruiting in-demand speakers and working with Kathy Kincade, who coordinated publicity, video, and social media coverage. Linda Vu and Public Affairs' Kelly Owen organized and executed the successful social media campaign around the series. Margie Wylie took photos. Zaida McCunney provided invaluable logistical support, including site access for visitors, transportation from OSF to the lab for NERSC employees, and the refreshments. Lastly, the lab's AV team was helpful and responsive, with quick fixes and fast turnaround on video.
»Links to videos of the first three lectures are now available on the NERSC web site. Links to Saul Perlmutter's talk and slides will be posted on the same page as they become available later this week.
This Week's CS Seminars
Bamboo - Automatically Masking Communication Overheads via Reformulating MPI Code into a Latency-tolerant and Data-driven Form
Thursday, June 19, 9:30am - 11:00am, Bldg. 70, Room 191
Tan Nguyen. University of California, San Diego
As current technological advances continue to focus on node architecture, the rate of computation is in turn expected to maintain its upward trend. However, communication remains one of the largest obstacles to scaling applications across thousands of computing nodes. We reinterpret the Message Passing Interface (MPI), a standard library for distributed-memory programming, to execute MPI applications under an asynchronous model that can hide communication overheads by overlapping communication with computation. To automate this process, we present Bamboo, a source-to-source translator that transforms MPI source code into a task graph formulation that executes in a data-driven fashion under the control of a runtime system. We validate Bamboo using a variety of applications classified into three HPC motifs: Dense Linear Algebra, Structured Grids, and Unstructured Grids. We show that for this range of application classes we can hide latency on a wide range of platforms, including traditional clusters of multicore processors as well as platforms based on accelerators (GPUs) and coprocessors (Intel's MIC). In all cases Bamboo realizes a significant reduction in communication delays with only modest amounts of user annotation.
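The overlap pattern the abstract describes can be sketched in a few lines. This is not Bamboo itself (Bamboo rewrites real MPI code into a task graph automatically); it is a minimal, hypothetical illustration of latency hiding in which a Python background thread stands in for a non-blocking MPI exchange (MPI_Isend/MPI_Irecv), and the join stands in for MPI_Wait:

```python
# Sketch of communication/computation overlap, the pattern Bamboo
# automates. A background thread simulates a non-blocking halo exchange;
# independent interior work proceeds while the "transfer" is in flight.
import threading
import time

def exchange_halo(boundary, result):
    """Simulated non-blocking halo exchange (stand-in for MPI_Isend/Irecv)."""
    time.sleep(0.1)                         # pretend network latency
    result["halo"] = [x + 1 for x in boundary]

def step(interior, boundary):
    result = {}
    req = threading.Thread(target=exchange_halo, args=(boundary, result))
    req.start()                             # returns immediately, like MPI_Isend
    interior_sum = sum(x * x for x in interior)  # overlapped computation
    req.join()                              # blocks until data arrives, like MPI_Wait
    return interior_sum + sum(result["halo"])

print(step([1, 2, 3], [10, 20]))            # 14 + 32 = 46
```

The point Bamboo makes is that programmers need not restructure code this way by hand: the translator extracts the task graph and its runtime schedules the overlap.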
Matching Data Intensive Applications and Hardware/Software Architectures
Thursday, June 19, 11:00am - 12:00pm, Bldg. 50A, Room 5132
Geoffrey Charles Fox, Associate Dean for Research and Distinguished Professor of Computer Science and Informatics, Indiana University Bloomington
There is perhaps a broad consensus as to the important issues in practical parallel computing as applied to large-scale simulations; this is reflected in supercomputer architectures, algorithms, libraries, languages, compilers, and best practice for application development. However, the same is not so true for data-intensive problems, even though commercial clouds presumably devote more resources to data analytics than supercomputers devote to simulations. We try to establish some principles that allow one to compare data-intensive architectures and decide which applications fit which machines and which software. We use a sample of over 50 big data applications to identify characteristics of data-intensive applications and propose a big data version of the famous Berkeley dwarfs and NAS parallel benchmarks. We consider hardware from clouds to HPC. Our software analysis builds on the Apache big data software stack (ABDS) that is widely used in modern cloud computing, which we enhance with HPC concepts to derive HPC-ABDS. We illustrate issues with examples including kernels like clustering and multi-dimensional scaling; cyberphysical systems; databases; and variants of image processing from beam lines, Facebook, and deep learning.
Smarr: The Cloud Will Give Birth to Real Artificial Intelligence
Will the cloud give birth to the first real artificial intelligence? According to Larry Smarr, founding director of the California Institute for Telecommunications and Information Technology and a member of ESnet's advisory board, the answer is yes: “We’re seeing a rebirth of artificial intelligence driven by the cloud, huge amounts of data and the learning algorithms of software,” he is quoted as saying in the New York Times "Bits" blog. »Read more.