
LBNL’s DataMover Reaches Milestone with Automated Transfer of 18,000 Files in a Single Request

November 30, 2004

Amidst the hype and hoopla at the recent SC2004 conference in Pittsburgh, Lawrence Berkeley National Laboratory’s Scientific Data Management Research Group demonstrated the robustness of the group’s DataMover by putting the application through its workaday paces. In doing so, the group reached a milestone when, with a single request, 17,870 data files were moved seamlessly from Brookhaven National Lab in New York to LBNL, both of which are operated by the U.S. Department of Energy.

What made the transfer significant was that it was steered by LBNL’s Eric Hjort from the conference floor in Pittsburgh, and that the number of files moved was the highest ever. But it was just another day of moving data for Hjort, who oversees the transfer of files generated by Brookhaven’s STAR experiment to the High Performance Storage System (HPSS) at the National Energy Research Scientific Computing (NERSC) Center at LBNL. DataMover automates every aspect of the transfer once the user specifies which directory, with all of its files, is to be moved and where it is to go.
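In practice, such a request amounts to naming a source directory and a destination; everything below that level is handled automatically. The sketch below is a minimal, hypothetical illustration of what a single directory-to-directory request might contain. The field names and paths are assumptions for illustration only, not DataMover’s actual request format.

```python
# Hypothetical illustration of a single directory-to-directory transfer request.
# Field names and paths are illustrative assumptions, not DataMover's real format.
transfer_request = {
    "source": {
        "site": "BNL",                              # data source: Brookhaven National Lab
        "storage": "hpss",                          # hierarchical storage system at the source
        "directory": "/star/raw/run2004",           # illustrative path only
        "recursive": True,                          # move the directory and every file under it
    },
    "target": {
        "site": "NERSC",                            # destination: HPSS at NERSC/LBNL
        "storage": "hpss",
        "directory": "/star/replica/run2004",       # illustrative path only
    },
}
```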

Once the application starts, DataMover communicates with the source and target hierarchical resource managers (HRMs) at BNL and NERSC. The HRMs are Grid middleware components, developed by the Scientific Data Management Research Group, that manage the staging and archiving of files to and from HPSS. DataMover extracts the directory structure from the source HPSS through its HRM, generates the corresponding directory structure at the target HPSS through its HRM, and hands the list of requested files to the target HRM. The target HRM then contacts the source HRM to stage the files and uses GridFTP to transfer the data.
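That paragraph describes a coordination pattern more than a specific API. The sketch below illustrates the same three steps in Python under stated assumptions: the HRMClient class, its methods, and the way GridFTP is invoked are hypothetical stand-ins, not the real HRM or SRM interfaces.

```python
class HRMClient:
    """Hypothetical stand-in for an HRM client; the real HRM/SRM interfaces differ."""

    def __init__(self, site: str):
        self.site = site

    def list_directory(self, path: str) -> dict:
        """Return the source directory tree and the files it contains (stub)."""
        raise NotImplementedError

    def make_directories(self, tree: dict) -> None:
        """Recreate the source directory structure on this site's HPSS (stub)."""
        raise NotImplementedError

    def queue_files(self, files: list) -> None:
        """Accept the list of requested files; this HRM then asks the source HRM
        to stage each file and pulls it over GridFTP (stub)."""
        raise NotImplementedError


def run_transfer(source_hrm: HRMClient, target_hrm: HRMClient, src_dir: str) -> None:
    """Illustrative orchestration of a single directory request."""
    tree = source_hrm.list_directory(src_dir)   # 1. extract directory structure from the source HPSS
    target_hrm.make_directories(tree)           # 2. mirror that structure at the target HPSS
    target_hrm.queue_files(tree["files"])       # 3. target HRM stages from the source and transfers via GridFTP
```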

Without DataMover, users would have to locate each file manually and then transfer the files one by one. Because STAR generates about 400 terabytes of data each year, automating the transfer is critical. The use of GridFTP with large windows, together with staging, transferring, and archiving multiple files concurrently, enables effective end-to-end transfer rates.
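The throughput comes from keeping the staging, network, and archiving stages busy on several files at once rather than handling one file at a time. The toy sketch below illustrates that idea with a simple thread pool; the stage, transfer, and archive functions are placeholders, not DataMover code, and the concurrency level is an arbitrary choice.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder stage functions; in the real system these would be HRM staging,
# a GridFTP transfer, and HPSS archiving, respectively.
def stage(path):    return path      # bring the file from tape to the source disk cache
def transfer(path): return path      # move the file across the network
def archive(path):  return path      # write the file into the target HPSS

def move_one(path):
    """Move a single file through the three stages."""
    return archive(transfer(stage(path)))

def move_all(paths, concurrency=8):
    """Process many files concurrently so all three stages stay busy."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(move_one, paths))

if __name__ == "__main__":
    print(move_all([f"file_{i:05d}.daq" for i in range(20)]))
```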

“A nice milestone and one more step in the path to success for the LBNL/SRM team. I have to renew my thanks and gratitude for such a tool: it has made our lives so much easier in STAR for the past few years,” said Jerome Lauret of Brookhaven. “Thanks and congratulations for a successful demo.”

The DataMover also handles “transient failures,” such as dropped network connections or problems in a storage system at either end, by automatically retrying until the connection is re-established. File-tracking logs help users monitor the progress of transfers and spot problems such as network slowdowns and bottlenecks. In addition, the operation records each transferred file in a file catalog at the target NERSC site.
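Retry-until-success with logging is a common pattern for this kind of failure handling. The sketch below shows one way it might look, assuming any callable that raises on failure; the backoff policy and log messages are assumptions for illustration, not DataMover’s actual retry policy or log format.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("datamover-sketch")

def transfer_with_retry(transfer, path, max_wait=300):
    """Retry a single file transfer until it succeeds, backing off between attempts.

    `transfer` is any callable that raises on failure (e.g. a dropped network
    connection or a storage-system error). Illustrative only.
    """
    attempt, wait = 0, 5
    while True:
        attempt += 1
        try:
            result = transfer(path)
            log.info("transferred %s on attempt %d", path, attempt)   # feeds a file-tracking log
            return result
        except (ConnectionError, OSError) as err:
            log.warning("transient failure for %s (%s); retrying in %d s", path, err, wait)
            time.sleep(wait)
            wait = min(wait * 2, max_wait)    # exponential backoff, capped at max_wait seconds
```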

“In the past, users had to baby-sit all the transfers,” said Alex Sim, a member of the Scientific Data Management Research Group and one of the developers of the application. “This application automates all those processes.” Various components of this system were developed by Sim, Junmin Gu and Vijaya Natarajan under the leadership of Arie Shoshani, who is the PI for the Storage Resource Management project at LBNL.

The DataMover can work with any storage system that supports an SRM middleware interface. It is also currently being used by the Earth System Grid, a climate research collaboration among Berkeley Lab, Oak Ridge and Lawrence Livermore national laboratories, and the National Center for Atmospheric Research. Recently, DataMover transferred 4,224 files containing 770 gigabytes from the HPSS at Oak Ridge to NCAR’s specialized Mass Storage System (MSS); for this purpose, the HPSS HRM was adapted to work with NCAR’s MSS.
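The value of the SRM interface is that the transfer logic never talks to a storage system directly; it talks to whatever implements the interface, so adapting to a new back end means writing a new implementation rather than changing DataMover. The sketch below illustrates that abstraction; the class and method names are assumptions, not the actual SRM specification.

```python
from abc import ABC, abstractmethod

class StorageResourceManager(ABC):
    """Illustrative stand-in for an SRM-style interface; not the real SRM spec."""

    @abstractmethod
    def stage(self, path: str) -> str:
        """Bring a file online (e.g. tape to disk cache) and return a transfer URL."""

    @abstractmethod
    def archive(self, path: str, data_url: str) -> None:
        """Write an incoming file into the underlying storage system."""

class HPSSManager(StorageResourceManager):
    def stage(self, path):                 # would call HPSS staging in a real HRM
        return f"gsiftp://hpss.example.org{path}"
    def archive(self, path, data_url):
        pass                               # would write into HPSS

class NCARMassStoreManager(StorageResourceManager):
    def stage(self, path):                 # same interface, different back end (NCAR's MSS)
        return f"gsiftp://mss.example.org{path}"
    def archive(self, path, data_url):
        pass                               # would write into NCAR's MSS
```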


About Computing Sciences at Berkeley Lab

High performance computing plays a critical role in scientific discovery. Researchers increasingly rely on advances in computer science, mathematics, computational science, data science, and large-scale computing and networking to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab’s Computing Sciences Area researches, develops, and deploys new foundations, tools, and technologies to meet these needs and to advance research across a broad range of scientific disciplines.