InTheLoop | 03.01.2004
The Weekly Electronic Newsletter for Berkeley Lab Computing Sciences Employees
March 1, 2004
All NERSC Production Systems Now on the Grid
With the new year came good news for NERSC users — all of NERSC's production computing and storage resources are now Grid-enabled and can be accessed by users running Grid applications built on the Globus Toolkit. As a result, Grid users now have access to NERSC's IBM supercomputer ("Seaborg"), the HPSS mass storage system, the PDSF cluster, the visualization server "Escher," and the math server "Newton." Users can also get support for Grid-related issues from NERSC's User Services Group.
"Now that the core functionalities have been addressed, our next push is to make the Grid easier to use and manage, for both end users and the administrators who handle applications that span multiple sites," said Steve Chan, who coordinated the Grid project at NERSC.
One of the major challenges faced by NERSC's Grid team was installing the necessary software on Seaborg, which runs a complicated software stack. The system also had to be configured and tested without interfering with its heavy scientific computing workload. By working with IBM and drawing on public domain resources, the team made Seaborg accessible via the Grid in January.
"With Steve Chan's leadership and the technical expertise of the NERSC staff, we crafted a Grid infrastructure that scaled to all of our systems," said Bill Kramer, general manager of the NERSC Center. "As the driving force, he figured out what needed to be done, then pulled together the resources to get it done."
To help prepare users, center staff presented tutorials at both GlobusWorld and the Global Grid Forum, and offered training specifically for NERSC users. When authenticated users log into NIM (NERSC Information Management), they can now enter their certificate information into the account database and have it propagated to all the Grid-enabled systems they normally access, eliminating the need for separate passwords on each system. The next step is for NIM to help non-credentialed users obtain a Grid credential.
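Once a user's certificate information has been registered through NIM and propagated to the Grid-enabled systems, a typical Globus Toolkit session might look like the following sketch. This is illustrative only — the host name shown is an assumption, and exact commands depend on the Globus Toolkit version a site has deployed:

```shell
# Create a short-lived proxy credential from the user's Grid certificate;
# prompts for the certificate passphrase, not a per-system password
grid-proxy-init -hours 12

# Inspect the proxy that will be presented to the site's Grid services
grid-proxy-info

# Run a simple command on a Grid-enabled system through its Globus
# gatekeeper (host name is hypothetical, for illustration only)
globus-job-run seaborg.nersc.gov /bin/hostname
```

Because the same proxy credential is presented to every Grid-enabled host, the user authenticates once rather than keeping a separate password for each system.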
Because the Grid opens up a wider range of access paths to NERSC systems, the security system was enhanced as well. Bro, the LBNL-developed intrusion detection system that provides one level of security for NERSC, was adapted to monitor Grid traffic for an additional layer of protection. The long-term benefit, according to Kramer, is that NERSC systems will become easier to use while security grows even stronger.
"Of course, this is not the end of the project," Kramer said. "NERSC will continue to enhance these Grid features in the future as we strive to continue providing our users with the most advanced computational systems and services."
This accomplishment, which also required the building of new access servers, reflects a two-year effort by staff members. Here is a list of participants and their contributions:
Nick Balthaser built the core production Grid servers and has been working on hardening the MyProxy server.
Scott Campbell developed the Bro modules that are able to analyze the GridFTP and Gatekeeper protocols, as well as examine the certificates being used for authentication.
Shane Canon built a Linux Kernel module that can associate Grid certificates with processes, and helped deploy Grid services for the STAR and ATLAS experiments.
Steve Chan led the overall project and coordinated the effort, as well as developing and porting much of the software that made these capabilities possible.
Shreyas Cholia built the modifications to the HPSS FTP daemon so that it supports GSI, making it compatible with GridFTP. He has also been working on the Globus File Yanker, a Web portal interface for copying files between GridFTP servers.
Steve Lau works on the policy and security aspects of Grid computing at NERSC.
Ken Okikawa has been building and supporting Globus on Escher to support the VisPortal application.
R.K. Owen, Clayton Bagwell and Mikhail Avrekh of the NIM team have worked on the NIM interface so that certificate information gets sent out to all the cluster hosts.
David Paul is the Seaborg Grid lead, and has been building and testing Globus under AIX.
Iwona Sakrejda is the lead for Grid support of PDSF and developed training for all NERSC users, as well as supporting all the Grid projects within the STAR and ATLAS experiments.
Jay Srinivasan wrote patches to the Globus code to support password lockout and expiration control.
Dave Turner is the lead for Grid support for Seaborg users. He recently helped a user, Frank X. Lee, move a few terabytes of files from the Pittsburgh Supercomputing Center to NERSC's HPSS using GridFTP.
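A bulk transfer like the one described above is typically driven with the Globus Toolkit's globus-url-copy client. The sketch below shows what such a third-party transfer between two GridFTP servers might look like; the host names and file paths are hypothetical, not the actual ones used:

```shell
# Authenticate once with a Grid proxy certificate
grid-proxy-init

# Third-party transfer between two GridFTP servers, using four parallel
# streams (-p) and a 2 MB TCP buffer (-tcp-bs) for wide-area throughput
# (host names and paths are illustrative only)
globus-url-copy -p 4 -tcp-bs 2097152 \
    gsiftp://gridftp.psc.edu/scratch/lattice-data.tar \
    gsiftp://garchive.nersc.gov/home/users/lattice-data.tar
```

Because the data flows directly between the two GridFTP servers rather than through the user's workstation, multi-terabyte transfers can proceed at wide-area network speeds.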
David Bailey Co-Organizes March 29-30 Workshop on Experimental Math
David Bailey, chief technologist for the NERSC Center and Computational Research divisions, and his frequent collaborator, Jonathan M. Borwein of Dalhousie University in Canada, are co-organizers of an Experimental Math Workshop to be held March 29-30 in Oakland. Co-hosted by Berkeley Lab and UC Berkeley, the workshop is designed to bring together leading researchers and students in the emerging field of "experimental mathematics," that is, the use of advanced computer technology as an active tool for mathematical research. Topics to be covered include computational number theory, symbolic computing tools, high-precision arithmetic techniques, integer relation algorithms, numerical discovery of new mathematical identities, applications of experimental mathematics in physics and other disciplines, experimental mathematics and art, computational techniques for formal proof, parallel computing for experimental mathematics, and others.
Borwein and Bailey are co-authors of the new books "Mathematics by Experiment: Plausible Reasoning in the 21st Century" and "Experimentation in Mathematics: Computational Paths to Discovery." For more information about the workshop, contact David at DHBailey@lbl.gov.
DSD's Keith Jackson Co-Authors Article on How to Set Up a Grid
Keith Jackson of the Distributed Systems Department and Jennifer Schopf of Argonne National Laboratory have written an article in the February issue of ClusterWorld magazine. In the piece, entitled "So You Want to Set Up a Grid," Keith and Jennifer discuss some rules of thumb to consider when setting up your own Grid. The article, available only in the print edition, offers the following criteria for a Grid:
* A Grid must coordinate resources that are not subject to centralized control and that cross organizational boundaries.
* A Grid must use standard, open, general-purpose protocols and interfaces.
* A Grid must deliver nontrivial qualities of service.
March 10 Lecture to Examine Performance Characteristics of Modern Supercomputers
Gerhard Wellein of the Regional Computing Center at the University of Erlangen in Germany will discuss "Performance Characteristics of Modern Supercomputers" in a talk starting at 11 a.m. Wednesday, March 10, in Bldg. 50A, room 5132. The talk is hosted by the Future Technologies Group in the Computational Research Division.
Here's the abstract for the presentation:
"In the past five years the supercomputing community encountered some fundamental changes. Some processor manufacturers announced the discontinuation of their processor series; others have launched a completely new architecture. A summary of theoretical performance numbers and important architectural features of recent processor technology (IBM Power4, NEC SX6, Intel Xeon/Itanium2, Hitachi SR8000, CRAY X1) in contemporary supercomputers is presented.
"The performance characteristics of these systems are discussed using a variety of application programs, ranging from serial codes in quantum physics and quantum chemistry to parallel CFD applications. The focus is on CFD applications, where performance numbers and optimization techniques for 'commodity off-the-shelf' and 'tailored' HPC systems are compared. In this context we further demonstrate that the Intel Itanium2 has become a competitive architecture that is ready to rival IBM's Power4 series."
Incident Response Course to Be Taught March 11
What should you do if your computer is hit with a security-related incident such as a hacker attack or worm infection? Come to a free course offered by the Computer Protection Program from 9 a.m. to 2 p.m. on Thursday, March 11 and find out. A course description is at http://www.lbl.gov/ITSD/Security/services/course-catalog.html#sysad7. Enrollment is limited — visit http://hris.lbl.gov/ to sign up.
Windows Users — Deal Wisely (and Carefully) with Email
Opening attachments is becoming increasingly dangerous. After worms infect systems, they glean addresses from users' address books and then send messages with infected attachments to those addresses. Users who open infected attachments are very likely to infect their own systems. Your best defense is to avoid opening attachments you are not expecting — even if the message containing the attachment appears to be from someone you know. If you think you should open an attachment because it appears to come from someone you know well, contact that person first to confirm he or she actually sent it.
PNNL Issues Call for Environmental Molecular Science Computational Proposals
The Molecular Science Computing Facility (MSCF) at Pacific Northwest National Laboratory has issued a call for proposals for allocations of computer time for Computational Grand Challenge Applications (CGCA) in environmental molecular science, research areas that address the environmental problems and research needs facing DOE and the nation. The call includes research applications in biology, chemistry, climate and subsurface science, and is open to all research entities regardless of research funding source. CGCA allocations run for three years, with the amount of computer time sized to the scope of the research to be performed.
A letter of intent is due by Friday, April 16, 2004, with final proposals due Monday, May 31, 2004. Announcement of awards is expected September 1, 2004. Further information on the CGCA call for proposals is available at http://www.emsl.pnl.gov/docs/tms/mscf/proposals/. The MSCF houses an 11.8 teraflop/s HP Linux Itanium2 cluster with 1,960 processors, along with a suite of computational chemistry software (the Molecular Science Software Suite) designed to take advantage of the new system. In particular, the call seeks research applications that will use a significant portion of this computational resource. The William R. Wiley Environmental Molecular Sciences Laboratory (EMSL, http://www.emsl.pnl.gov/) is a user research facility funded by DOE's Office of Biological and Environmental Research.
About Computing Sciences at Berkeley Lab
The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.
ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.
Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.