InTheLoop | 06.28.2010
Horst Simon Is Interviewed for “Modeling the World” Website
Associate Lab Director Horst Simon was interviewed by Microsoft Technical Computing for their Modeling the World website. In a short video excerpt on Facebook, Simon discusses the inevitability of parallelism, a new Moore’s Law, and the challenges to come, like programming a billion cores. In the longer seven-minute video, he discusses his personal fascination with computers and the applications for which they are, and will be, used.
Four Computing Magazines Interview Kathy Yelick and John Shalf
In the article “Is cloud computing fast enough for science?” Federal Computer Week magazine discusses early results from DOE’s Magellan cloud computing testbed with NERSC Director Kathy Yelick. “For the more traditional MPI applications there were significant slowdowns, over a factor of 10,” Yelick said. But for computations that can be performed serially, such as genomics calculations, there was little or no deterioration in performance in the cloud. Yelick summarized the goals of the research:
We’re really looking at this as an alternative to scientists buying their own clusters. Our goal is to inform DOE and the scientists and industry what is the sweet spot for cloud computing in science; what do you need to do to configure a cloud for science, how to manage it, what is the business model, and do you need to buy your own cloud.
Yelick discusses cloud computing in more detail in “Uncovering Results in the Magellan Testbed” in HPC in the Cloud.
EnterTheGrid/Primeur Magazine, a European online magazine for high performance computing and networking, interviewed John Shalf, head of NERSC’s Advanced Technologies Group, at ISC’10 in Hamburg. The resulting article, titled “It takes three to tango in exascale computing: memory, photonic interconnects and embedded processors,” discusses the Green Flash project and the future of supercomputing from the hardware side. On the software side, Shalf discussed native parallel programming languages with International Science Grid This Week in “Q & A — John Shalf talks parallel programming languages.” One of Shalf’s key points:
Many programmers still think in terms of converting serial programs to run in parallel, but that approach can be very limiting because the best parallel algorithms are completely different from their serial counterparts. Parallel languages can make it easier to write parallel algorithms and the resulting code will run more efficiently because the compiler will have more information to work with.
Berkeley Team Wins Best Paper at Cloud Computing Conference
A team of researchers from Berkeley Lab has received the Best Paper Award at ScienceCloud 2010, the first Workshop on Scientific Cloud Computing sponsored by the Association for Computing Machinery. The paper, “Seeking Supernovae in the Clouds: A Performance Study,” was written by Keith Jackson and Lavanya Ramakrishnan of the Advanced Computing for Science Department, Karl Runge of the Physics Division, and Rollin Thomas of the Computational Cosmology Center. Read more.
NERSC and HDF Group Optimize HDF5 Library to Speed Up I/O
There are several layers of software that deal with I/O on high performance computing systems. Getting these layers of software to work together efficiently can have a big impact on a scientific code’s performance. That’s why NERSC has partnered with the nonprofit Hierarchical Data Format (HDF) Group to optimize the performance of the HDF5 library on modern HPC platforms. Read more.
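As a rough illustration of where the HDF5 layer sits, here is a minimal sketch using the h5py Python bindings. The file name, dataset name, chunk shape, and compression setting are illustrative choices only, not details of the NERSC/HDF Group optimization work.

```python
import numpy as np
import h5py  # Python bindings for the HDF5 library

# The application hands an array to HDF5; the library decides the on-disk
# layout, and lower layers (MPI-IO, the parallel file system) move the bytes.
data = np.arange(1000, dtype=np.float64).reshape(100, 10)

with h5py.File("simulation.h5", "w") as f:
    # Chunking and compression are among the knobs that I/O tuning targets.
    dset = f.create_dataset("temperature", data=data, chunks=(10, 10),
                            compression="gzip")
    dset.attrs["units"] = "kelvin"

# Reading back goes through the same layered stack in reverse.
with h5py.File("simulation.h5", "r") as f:
    assert f["temperature"].shape == (100, 10)
```

Because every layer (application, HDF5, MPI-IO, file system) has its own buffering and alignment behavior, tuning them jointly rather than separately is what yields the large I/O speedups described above.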
Building 50D Demolition Is Beginning Soon
Building 50D, the modular unit that formerly housed several CRD and NERSC groups, will be demolished as the last phase of the Building 50 complex seismic upgrade project. Demolition will begin within the next week and will take seven to eight weeks. There will be a tent-like barricade around 50D, and the adjacent vending machines will be out of service for the duration of the demolition. The third floor hallway doors will remain open, allowing access to the outdoor stairs leading up to the auditorium and the H2 parking lot. The outdoor stairway from the vending machines to 50F will be blocked off at the top.
Shreyas Cholia Giving Talk on Web Gateways at SciPy 2010
Shreyas Cholia of NERSC is giving a talk on “Building Web Gateways to Scientific Computing in Python” at SciPy 2010, the Python for Scientific Computing Conference being held today through Saturday, June 28–July 3, in Austin, Texas. He will discuss how Python and Python-based frameworks have played an active role in the development of NERSC’s Science Gateways, including the Deep Sky and Palomar Transient Factory sky surveys, the NERSC Web API, and the 20th Century Reanalysis Project for climate modeling. These efforts engage various layers of the scientific Python ecosystem.
Ergonomic Tip: Gmail Keyboard Shortcuts
If you prefer to use the keyboard instead of the mouse for common email tasks, consider enabling keyboard shortcuts in the Gmail web interface. Keyboard shortcuts allow you to perform many common tasks like creating a new message, replying, archiving, and going to the next message without using the mouse. The Lab’s Gmail Help Center has posted an FAQ page that gives all the information you’ll need.
Apply Now for Participation Grants to Attend SC10
SC10, planned for November 13-19, 2010 in New Orleans, LA, is the leading international conference on high performance computing (HPC), computational science, and networking. SC10 will continue the SC Communities initiative, which includes its successful Broader Engagement (BE) program. BE seeks to open the door to SC for groups that have traditionally been under-represented in computing (African-Americans, Hispanics, Native Americans, Alaska Natives, Native Hawaiians, Pacific Islanders, the physically challenged, and women).
Opportunities for students, faculty and professionals include travel support through the BE program. Additional SC Communities programs provide travel support for educators, information for international attendees, and a volunteer program for those interested in helping to make SC10 a success. Visit the SC Communities web page and sign up now to participate.
This Week’s Computing Sciences Seminars
Toward Smart-Tuned Sparse Iterative Solvers for Post-Petascale Computing
Monday, June 28, 1:00–2:00 pm, 50B-4205
Serge G. Petiton, CNRS-LIFL
The recent evolution of high performance computing is leading to the development of innovative new architectures. Future computers will be built as large, hierarchical, and heterogeneous systems, integrating cloud computing and different programming paradigms to push forward the frontiers of high performance computing. From large-granularity computing to the optimization of multi-core processors and modern accelerators, end users and software engineers will face new programming challenges. Expertise in all of these topics, and in their interactions toward large integrated solutions, is the sine qua non condition for efficient post-petascale computing on the road to the exascale.
In this talk, we survey our recent research results on sparse iterative methods and their programming paradigms. In particular, we analyze the accuracy of Krylov basis computations on IBM Cell and GPU processors, present a new auto-tuned subspace-size algorithm for the GMRES method on matrices with complex elements, and present a performance evaluation with respect to sparse storage formats (and, building on it, a format-auto-tuned matrix–vector multiplication algorithm). We then discuss the prospects for smart tuning of hybrid iterative methods for future exascale computing.
We conclude with the importance of new programming paradigms and languages, such as YML, and survey several research directions that contribute to this quest toward the next computing frontier.
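The sparse-format point above can be made concrete: the same matrix stored in different formats gives identical results but very different memory-access patterns, which is what format auto-tuning exploits. A minimal sketch with SciPy (the matrix size and density are arbitrary choices for illustration):

```python
import numpy as np
from scipy import sparse

# Build one random sparse matrix and compare sparse matrix-vector
# multiplication (SpMV) across common storage formats.
rng = np.random.default_rng(0)
A_coo = sparse.random(1000, 1000, density=0.01, format="coo", random_state=0)
x = rng.standard_normal(1000)

# COO, CSR, and CSC hold the same values in different layouts; the results
# agree, but performance on real hardware can differ substantially,
# motivating auto-tuned format selection.
y_coo = A_coo @ x
y_csr = A_coo.tocsr() @ x
y_csc = A_coo.tocsc() @ x

assert np.allclose(y_coo, y_csr) and np.allclose(y_coo, y_csc)
```

An auto-tuner in this spirit would time the candidate formats on the target machine and keep the fastest one for the many SpMV calls inside an iterative solver.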
Multiple Implicitly Restarted Arnoldi Methods to Solve Eigenproblem
Tuesday, June 29, 1:00–2:00 pm, 50F-1647
Nahid Emad, PRiSM Laboratory, University of Versailles, France
Restarted Arnoldi methods (RAMs) compute some of the eigenpairs of large sparse non-Hermitian matrices. However, the size of the subspace in these methods is chosen empirically, and a poor choice can lead to non-convergence. We propose a technique to remedy this problem. This approach, called the multiple restarted Arnoldi method (MRAM), is based on projecting the problem onto several Krylov subspaces instead of a single one. MRAM updates the restarting subspace of a RAM by taking into account the useful eigen-information obtained in all subspaces. We present a general framework for this kind of method and an adaptation of the concept to the explicitly and implicitly restarted Arnoldi methods (ERAM/IRAM). We then focus on a particular case, the multiple implicitly restarted Arnoldi method (MIRAM) with nested subspaces (MIRAMns).
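For readers who want to experiment, ARPACK's implicitly restarted Arnoldi method (IRAM) is exposed in SciPy as `scipy.sparse.linalg.eigs`, and its `ncv` parameter is precisely the empirically chosen subspace size the abstract discusses. A small sketch (the matrix size, `k`, and `ncv` values are arbitrary illustrative choices):

```python
import numpy as np
from scipy.sparse.linalg import eigs

# eigs wraps ARPACK's implicitly restarted Arnoldi method (IRAM).
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))  # dense here for simplicity; non-Hermitian

# Ask for the k eigenpairs of largest magnitude ("LM"); ncv is the Krylov
# subspace size -- the parameter that is normally chosen empirically and
# that a too-small value can cause to stall or fail to converge.
vals, vecs = eigs(A, k=4, ncv=20, which="LM")

# Residual check: ||A v - lambda v|| should be small for each computed pair.
for lam, v in zip(vals, vecs.T):
    assert np.linalg.norm(A @ v - lam * v) < 1e-6
```

Rerunning with a smaller `ncv` (closer to `k`) illustrates the sensitivity the talk addresses: convergence slows or ARPACK raises a no-convergence error, which is the behavior MIRAM's multiple-subspace restarting aims to avoid.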
Identity and Collaboration at Scale: Federations, Attributes and Science
Thursday, July 1, 11:30 am–1:00 pm, 50A-5132
Ken Klingenstein, Director, Internet2 Middleware and Security
In the past several years, Internet identity has grown exponentially, with federated and social identity deployments rapidly becoming a new layer of global trust infrastructure. Now the other half of the collaboration dynamic — access controls — is beginning to develop, and with it, the ability to significantly increase the effectiveness of research and science. This talk will begin by discussing federations, including the US R&E federation InCommon, and the attributes that it transports. It will cover new InCommon services, including higher levels of assurance and personal certificates. It will describe how these developments are being harnessed into collaboration management platforms that “domesticate” domain and collaboration applications. The use of these platforms is now beginning in some notable national and international virtual organizations, and early results will be discussed.
Link of the Week: Visualizing the BP Oil Spill
Ifitwasmyhome.com provides up-to-date information on the BP oil spill as well as an interactive map that lets you move the outline of the oil spill anywhere in the world to understand how big it is. For example, if the spill had occurred in Berkeley (and California were underwater), as of this morning it would stretch far into the Pacific Ocean, south to San Mateo, southeast to Modesto, northeast to Carson City, Nevada, and north to Ukiah, Willows, and Oroville. Center the spill on New York City and it stretches from central Pennsylvania to Rhode Island.
About Computing Sciences at Berkeley Lab
The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.
ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.
Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.