InTheLoop | 01.14.2008
The weekly newsletter for Berkeley Lab Computing Sciences
Time’s Up: NERSC Shuts Down Seaborg
Just after 2 p.m. last Friday, January 11, the NERSC staff shut down Seaborg, the 6,600-processor IBM supercomputer that had been the center’s workhorse since August 2001. In all, the system delivered more than 250 million processor-hours of computing time to NERSC users. When inaugurated, Seaborg was ranked at number 2 on the TOP500 list, but had slipped to number 331 on the most recent list.
Symposium Seeks Papers on “Programmability versus Performance”
High Performance Computer Science Week (HPCSW) is designed to bring together computer and computational scientists to address present and future issues and challenges in high performance computing. The first annual HPCSW will be held at the Grand Hyatt Hotel in Denver, Colorado, from Monday, March 31, through Saturday, April 5, 2008. The keynote speaker will be Tony Hey, Corporate Vice President of the External Research Division of Microsoft Research.
HPCSW seeks to foster discussions on the computer science that lies between simulation/data-centric codes and the hardware upon which they run. There will be two related events at the conference: a principal investigator meeting for those involved in computer science within the Department of Energy's Advanced Scientific Computing Research (ASCR) office, and a symposium open to all computer and computational scientists. For those involved in international LinuxBIOS research, there will be a final workshop on April 5.
This year’s HPCSW symposium seeks invited paper sessions, workshops, tutorials and posters on the theme of “Programmability versus Performance.” For many years the barrier to widespread use of parallel computers has been the special knowledge necessary to program them effectively. “It’s the software, stupid” has been the mantra of scientific computing and the recommended focus of numerous national studies since the dawn of parallel computing. Even now, scientific computing is alive and well on the “low end” in desktops and small local servers — and on the “high end” at big institutions with the institutional resources to devote to programming. However, the “missing middle” has never thrived, as the entry cost and required knowledge have been too high.
Processor and hardware architecture changes are now upon us that will make this problem even worse. Multi/many-core processors offer significant on-chip parallelism. Special-purpose processors (FPGAs, GPUs, accelerators) are appearing within systems, increasing their heterogeneity and even showing up on the processor die. The scientific programming gap between the “haves” and the “have-nots” will surely increase without significant effort devoted to broader abstract machine models and application programming interfaces. Even on the high end, there is a renewed need for simplified programming of many-core and heterogeneous architectures with the advent of the new generation of NSF and DOE leadership class machines. HPCSW seeks contributions in programming idioms and languages, operating system and runtime software, and algorithms.
Computing Community Consortium Requests Visioning Proposals
The Computing Community Consortium (CCC) is continuing its request for Visioning Proposals for computing research. The CCC was created by the Computing Research Association (CRA) and is funded under a cooperative agreement from the National Science Foundation (NSF). The purpose of the CCC is to catalyze the computing research community to debate more audacious research challenges; to build consensus around major, long-term research directions; to articulate research agendas; to evolve the most promising visions toward clearly defined initiatives; and to work with funding organizations to move those initiatives toward funded programs.
Proposals may be submitted at any time. The CCC Council will review them on at least a quarterly basis. Anyone interested but not yet ready to respond to the RFP can send a simple statement of interest to firstname.lastname@example.org, including topic area and other relevant information, as a placeholder to give the CCC an idea of the breadth of areas and of interests.
Sapphire Data Mining System Is Topic of Seminar on Friday
Chandrika Kamath of the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory will give a talk on her Sapphire data mining system (which recently received an R&D 100 Award) on Friday, January 18, from 1:00 to 2:00 p.m. in conference room 50A-5132. The title of the talk is “Scientific Data Mining: Challenges at the Petascale.”
Here is the abstract:
“The data from scientific simulations, observations, and experiments is now being measured in terabytes and will soon reach the petabyte regime. The size of the data, as well as its complexity, makes it difficult to find useful information in the data. This is of course disconcerting to scientists who wonder about the science still undiscovered in the data. The Sapphire scientific data mining project at Lawrence Livermore National Laboratory has been addressing this concern by applying data mining techniques to problems ranging in size from a few megabytes to a hundred terabytes in a variety of domains. Using example problems from astronomy, fluid mixing, remote sensing, and experimental physics, I will describe our solution approaches and discuss some of the challenges we have encountered in mining these datasets.”
About Computing Sciences at Berkeley Lab
The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.
ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.
Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.