Berkeley Lab Projects Advance Running, Scheduling of Scientific Workflows on HPC Systems
December 4, 2017
Contact: Jon Bashor, email@example.com, 510-486-5849
Researchers are increasingly turning to high performance computing (HPC) systems to carry out scientific workflows, which are executed as a series of steps and programs to study complex problems. However, doing so can require a number of time-consuming manual tasks by the user and doesn't always make the most efficient use of the system.
Recently, researchers at the Department of Energy’s (DOE) Lawrence Berkeley National Laboratory (Berkeley Lab) released publicly available software that allows HPC scheduling systems to automatically address these issues.
The software is the result of a two-year collaboration between staff in Berkeley Lab’s Computational Research Division and the Distributed Systems Group at Umeå University in Sweden.
“We looked at the infrastructure that supports HPC for workflows involving simulations and experimental or observational data and came to the conclusion that it used inefficient methods,” said Lavanya Ramakrishnan of the Usable Software Systems Group in the lab’s Data Science and Technology Department. “Currently, HPC schedulers have no knowledge of the workflow structure. Users submit an entire workflow as a single job or submit each stage as an individual job. But if the scheduler could see the entire view of the workflow, including the future jobs in the pipeline, we could potentially have a lot of impact on increasing efficiency.”
As an example, Ramakrishnan said that a workflow submitted as a single job could use 256 processors at one point, but only a single processor during another stage. Schedulers used today, however, hold all 256 processors for the entire run. And since large systems typically run several jobs at the same time, many processors can sit idle, wasting computing cycles and electricity. The only way around this is for the user to submit the workflow as a series of separate jobs, which is not a good use of their time and incurs long wait times in the job queue.
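The waste in that example can be made concrete with a little arithmetic. The sketch below (an illustration based on the scenario described above, not code from WoAS; the stage durations are made-up assumptions) compares the core-hours a conventional scheduler would hold against what the workflow actually needs:

```python
# Illustrative only: stage durations are assumed, not from the paper.
# Each stage is (name, processors needed, hours).
stages = [
    ("simulation", 256, 2.0),
    ("analysis",     1, 3.0),
]

peak = max(p for _, p, _ in stages)
total_hours = sum(h for _, _, h in stages)

# A workflow submitted as a single job holds its peak allocation throughout.
static_core_hours = peak * total_hours

# A workflow-aware scheduler could release processors between stages.
actual_core_hours = sum(p * h for _, p, h in stages)

waste = static_core_hours - actual_core_hours
print(f"held: {static_core_hours} core-hours, needed: {actual_core_hours}, "
      f"idle: {waste} ({100 * waste / static_core_hours:.0f}% wasted)")
```

With these assumed numbers, roughly 60 percent of the reserved core-hours sit idle, which is the kind of inefficiency a scheduler with a full view of the workflow could recover.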
In June, the team presented WoAS, or Workflow-Aware Scheduling system, at the 26th International Symposium on High-Performance Parallel and Distributed Computing. WoAS enables existing scheduling algorithms to exploit the fine-grained information of a workflow's resource requirements and structure without modification. The team has developed an implementation of WoAS for Slurm, a widely used HPC batch scheduler.
Now the WoAS code has been released as an open-source project and made available to scheduling researchers and developers.
Getting to WoAS
Much of the work leading up to WoAS was done by Gonzalo Rodrigo Álvarez while he was earning his Ph.D. in computer science at Umeå University. He was working on the Frieda project analyzing HPC workloads, and when he talked to scientists at Berkeley Lab, he heard their tales of woe regarding workflows: the need for different resources, the long wait times between various pieces of the work, and the resulting long turnaround times for completed workflows.
“I thought it would be pretty easy to do research into schedulers, but I’d need a number of tools to do it,” said Rodrigo, who received his Ph.D. in April 2017. “We thought we could find those tools in the HPC community, but there weren’t many available, and those that we found were very old, and didn’t reflect the state of the art at all.”
That lack led to the second component of the project: the open-source Scheduler Simulation Framework, or ScSF.
“We needed something to test our WoAS algorithms, but there weren’t any solid simulators that captured the behavior of real HPC systems,” Rodrigo said. “ScSF allows us to cover all the steps of scheduling research through simulation. It provides capabilities for workload modeling, workload generation, system simulation, comparative workload analysis and experiment orchestration.”
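Workload generation, one of the ScSF capabilities Rodrigo lists, can be sketched in a few lines. The snippet below is a minimal illustration of the idea (the field names and statistical distributions are assumptions for this example, not ScSF's actual model or API): synthetic jobs with arrival times, processor counts and runtimes that a simulator can then feed to a scheduler.

```python
import random

def generate_workload(n_jobs, seed=0):
    """Generate synthetic job records: arrival time, cores, runtime.

    Distributions here are illustrative assumptions, not ScSF's model.
    """
    rng = random.Random(seed)
    t = 0.0
    jobs = []
    for job_id in range(n_jobs):
        t += rng.expovariate(1 / 60.0)     # assumed mean inter-arrival: 60 s
        cores = 2 ** rng.randint(0, 8)     # power-of-two allocations
        runtime = rng.uniform(300, 7200)   # assumed range: 5 min to 2 h
        jobs.append({"id": job_id, "arrival": t,
                     "cores": cores, "runtime": runtime})
    return jobs

workload = generate_workload(5)
for job in workload:
    print(job)
```

A real framework like ScSF layers far more on top of this, such as modeling workloads after real system traces and orchestrating the experiments that replay them, but the basic shape of the generated data is the same.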
To get simulated results they had confidence in, the team needed to run large numbers of scenarios that took all of the variables into account. They ran scenarios over and over, often in parallel, just as a production HPC system would be used. When they were done, their runs added up to the equivalent of 30 years of simulated operation of an HPC system. Then they had to analyze, measure and compare the results in order to extract the data showing that their algorithms worked as envisioned.
ScSF was presented in June at the 21st Workshop on Job Scheduling Strategies for Parallel Processing. More recently, ScSF has been released as an open-source project so that other researchers can take advantage of its features.
WoAS and ScSF are available for download at http://frieda.lbl.gov/download
This work was supported by the DOE Office of Science (Office of Advanced Scientific Computing Research) and used resources at the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility.
Berkeley Lab is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
Financial support has also been provided in part by the Swedish Government's strategic effort eSSENCE and the Swedish Research Council (VR) under contract number C0590801 (Cloud Control).
About Computing Sciences at Berkeley Lab
The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy's research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.
ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab's Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. NERSC and ESnet are DOE Office of Science User Facilities.
Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the DOE’s Office of Science.