InTheLoop | 08.27.2012
Sudip Dosanjh Named New NERSC Director
On August 21, Associate Lab Director Kathy Yelick announced that Sudip Dosanjh, a leader in extreme-scale computing at Sandia National Laboratories in Albuquerque, has agreed to serve as the next NERSC Division Director. He is planning to start in November 2012.
Dosanjh is currently the group lead for extreme-scale computing at Sandia, and co-led the development of the exascale technology roadmap that was presented at the DOE Architectures and Technology workshop in December 2009. He has given numerous presentations on exascale computing and played a key role in establishing co-design as a DOE strategy for achieving exascale computing. Earlier in his career, Dosanjh worked extensively in developing large-scale parallel scientific applications in areas such as materials modeling, nuclear reactor safety, combustion and heat transfer. Read more.
John Killeen, First NERSC Director, Dies at 87
John Killeen, the founding director of what is now known as the National Energy Research Scientific Computing Center (NERSC), died August 15, 2012 at age 87. Killeen led the Center from 1974 until 1990, when he retired. The Department of Energy conferred its highest honor, the Distinguished Associate Award, on Killeen in 1980 in recognition of his outstanding contribution to the magnetic fusion energy program. Read more.
Supernovae of Similar Brightness, Cut from Vastly Different Cosmic Cloth
Exploding stars called Type Ia supernovae are ideal for measuring cosmic distance because they are bright enough to spot across the Universe and have roughly the same intrinsic luminosity wherever they occur. Although astronomers have many theories about the kinds of star systems involved in these explosions (the progenitor systems), no one had directly observed one until now. The event was initially detected by the Palomar Transient Factory (PTF) Real-Time Detection pipeline at NERSC. Read more.
Bringing The Review of Particle Physics Online
Scientists from the international Particle Data Group teamed up with computer engineers in CRD’s Advanced Computing for Science Department to develop a web-based system that supports the PDG collaboration’s workflow, as well as an interactive online version of The Review of Particle Physics, the most comprehensive reference in particle physics. Read more.
Speeding the Search for Better Carbon Capture
A computer model that can identify the best molecular candidates for removing carbon dioxide, molecular nitrogen, and other greenhouse gases from power plant flues has been developed by researchers with Berkeley Lab, the University of California, Berkeley, and the University of Minnesota, using computational resources at NERSC.
The model is the first computational method to provide accurate simulations of the interactions between flue gases and a special variety of the gas-capturing molecular systems known as metal-organic frameworks (MOFs). It should greatly accelerate the search for new low-cost and efficient ways to burn coal without exacerbating global climate change. Read more.
Financial Firms Support CIFT Assessment of HPC Role in Market Stability
The Center for Innovative Financial Technology (CIFT) at Berkeley Lab has received $100,000 in research donations to study and promote the use of leading edge supercomputing and data intensive science for improving stability, regulation, and enforcement in U.S. markets. The funds were contributed by financial firms. Read more.
IEEE Forms Group to Confront Network Traffic Swells
ESnet’s Greg Bell is quoted in an announcement from the Institute of Electrical and Electronics Engineers (IEEE) that it is forming a group focused on bringing wired Ethernet up to the speeds that will be needed by 2015. The IEEE is taking steps toward a new Ethernet standard capable of between 400 Gbps and 1 Tbps. Bell discusses the data revolution occurring in science today and how high-performance scientific networks like ESnet are preparing for it. Read more.
Michael Wehner Answers Questions about Extreme Weather and Climate Change
The public asked, and we answered. In this follow-up “Ask a Scientist” video, Michael Wehner of the Computational Research Division answers questions about droughts, floods, heat waves, and extreme weather in the context of climate change. Wehner uses high performance computing to study extreme weather events in a changing climate. He had invited viewers to send in their questions in a video posted on August 13.
Alvarez Fellow Anubhav Jain Featured in DEIXIS Cover Story
Anubhav Jain, a 2011 Luis W. Alvarez Fellow, is featured in the cover story of the current issue of DEIXIS, an annual publication of the DOE Computational Science Graduate Fellowship program (CSGF). The story is titled “Computation Shines in Photovoltaics Search: Anubhav Jain’s Practicum Predicts New Energy-Capturing Materials.” Jain began his work at Berkeley Lab in a 2010 CSGF summer practicum. Read more.
MSRI and Simons Institute Present “Alan Turing: A Centenary Celebration”
The Mathematical Sciences Research Institute (MSRI) and the Simons Institute for the Theory of Computing are presenting “Alan Turing: A Centenary Celebration” on Tuesday, September 4, from 6:00 to 8:30 pm at the Berkeley City College auditorium, 2050 Center Street (near the Downtown Berkeley BART station).
Alan M. Turing (1912–1954) was a mathematician, logician, cryptanalyst, and computer scientist. He formalized the concepts of “algorithm” and “computation” via the Turing machine, providing a blueprint for the electronic digital computer, and is widely considered to be the father of computer science and artificial intelligence.
From 6:00 to 7:00 pm, Andrew Hodges, the author of the acclaimed biography Alan Turing: The Enigma, will give a presentation on our current understanding of Turing’s life, work, and untimely death at the age of 41.
From 7:15 to 8:30 pm, Richard Karp, Founding Director of the Simons Institute for the Theory of Computing, will moderate a panel discussion on the influence of Turing’s work on current research in logic, computer science, complexity, and biology. The panelists will include Martin Davis (Courant Institute), Andrew Hodges (University of Oxford), Don Knuth (Stanford University), Peter Norvig (Google), Dana Scott (Carnegie Mellon University), and Luca Trevisan (Stanford University).
Admission is free, but seating will be limited.
This Week’s Computing Sciences Seminars
Understanding Differences for Better Team Performance
Tuesday, August 28, 11:00 am–4:00 pm, OSF 943-238
Kathy Anttila, Berkeley Lab Learning Institute (BLI)
Research has shown that diverse teams are more effective, creative, and productive than homogeneous teams. However, to obtain the best results, diverse team members need to understand and respect each other’s differences and identify unifying bonds that enable the team to progress toward its goals. This session consists of two workshops designed to help all staff explore ways to enhance team interactions and results. Part I focuses on cultural differences, while Part II focuses on personality styles. Contact Elizabeth Bautista to register.
OSF Brown Bag: Using NERSC WebEx
Wednesday, August 29, 12:00–1:00 pm, OSF 943-238
Richard Gerber, NERSC
NERSC has two WebEx accounts that can be used for holding remote meetings or webinars. I’ll discuss some of WebEx’s capabilities and use cases and show you how to reserve and start a meeting.
Solving the GPS Problem in Almost Linear Complexity: LAPACK Seminar
Wednesday, August 29, 12:10–1:00 pm, 380 Soda Hall, UC Berkeley
Shamgar Gurevich, University of Wisconsin, Madison
A client on the Earth’s surface wants to know her geographical location. The Global Positioning System (GPS) was built to fulfill this task. It works as follows. Satellites send their locations to Earth. For simplicity, the location of a satellite is a bit b = 1 or -1. The satellite transmits to Earth a sequence of N > 1000 complex numbers S[0], S[1], ..., S[N-1] multiplied by its location b. The client receives a sequence R, which is a noisy version of S distorted in two parameters that encode the distance and the relative radial velocity of the satellite with respect to the client. The GPS Problem is to calculate the distance and the bit b. A client can compute her location from the locations of at least three satellites and her distances to them. The sequences S currently in use are pseudorandom, and the standard algorithm takes O(N^2 log N) arithmetic operations. In this lecture I will explain our recent construction of sequences S that allows a much faster algorithm: it solves the GPS Problem in O(N log N) operations.
This is a joint work with A. Fish (Sydney), R. Hadani (Austin), A. Sayeed (Madison), and O. Schwartz (Berkeley).
Performance Modeling of Service Level Objective-Driven MapReduce Environments
Wednesday, August 29, 3:00–4:00 pm, Soda Hall, Wozniak Lounge, UC Berkeley
Abhishek Verma, UC Berkeley
Several organizations are using MapReduce for efficient large-scale data processing tasks such as personalized advertising, spam detection, scientific computation, and data mining. There is a growing need among MapReduce users to achieve different Service Level Objectives (SLOs). Often, applications need to complete data processing within a certain time frame. Alternatively, users are interested in completing a set of jobs as fast as possible. Accurate and efficient performance modeling tools are needed for designing, prototyping, and evaluating new resource allocation and job scheduling algorithms to support these SLOs.
In this presentation, I will talk about performance modeling of MapReduce environments through a combination of measurement, simulation, and analytical modeling for enabling different service level objectives. We propose an analytical performance model based on key performance characteristics measured from past job executions, and we have built a simulator capable of replaying these job traces. To demonstrate the usefulness of our techniques, we apply them to achieve service level objectives such as enabling deadline-driven scheduling, optimizing the makespan of a set of MapReduce jobs, and benchmarking different hardware.
About Computing Sciences at Berkeley Lab
The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.
ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 7,000-plus scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are Department of Energy Office of Science User Facilities.
Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.