Advance
Berkeley Lab’s Fiats software library brings deep learning directly to modern Fortran, enabling scientists to train and deploy neural network surrogate models without leaving the language that powers much of high‑performance computing. Fiats demonstrated large‑scale capability by performing batch inferences on NERSC’s Perlmutter supercomputer with an aerosol surrogate trained on Energy Exascale Earth System Model (E3SM) data. Using Fortran’s “do concurrent” loop parallelism, Fiats achieved CPU performance competitive with OpenMP compiler directives, demonstrating that AI‑enabled science in native Fortran can be both portable and fast.
Background
Fortran, the world’s first widely used programming language, still accounts for a significant share of computing time on the world’s leading supercomputers and remains central to large‑scale modeling in computational science and engineering. Yet most deep-learning tools are designed for Python, C++, or specialized machine learning frameworks, leaving a gap for Fortran‑based codes to integrate AI techniques. Fiats fills this gap through close collaboration with LLVM Flang compiler developers in the U.S. and Europe, ensuring Fortran evolves to meet the needs of artificial intelligence for science. Potential applications include accelerating aerosol calculations in E3SM and cloud microphysics in the Intermediate Complexity Atmospheric Research (ICAR) model.
Breakdown
The Fiats interface is built primarily from functions that conform to Fortran’s constraints on “pure” procedures, the only kind of procedure callable within Fortran’s loop‑parallel construct, “do concurrent.” This design eases parallel integration while maintaining portability. Fiats also makes novel use of a language feature that avoids dynamic dispatch, reducing overhead and paving the way for GPU execution. Together, these design choices allow Fiats to run efficiently on CPUs today while providing a clear path toward GPU acceleration in the future.
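To illustrate the pattern described above, here is a minimal, self‑contained sketch of a pure function invoked inside “do concurrent.” The module, procedure, and variable names are illustrative assumptions for this example only and are not the actual Fiats API; the point is that only pure procedures, such as the sigmoid activation below, may be referenced inside the parallel loop body.

```fortran
! Hypothetical sketch (not the Fiats API): a pure activation
! function called inside Fortran's loop-parallel construct.
module activation_m
  implicit none
contains
  pure elemental function sigmoid(x) result(y)
    real, intent(in) :: x
    real :: y
    y = 1.0 / (1.0 + exp(-x))
  end function sigmoid
end module activation_m

program batch_inference_sketch
  use activation_m, only : sigmoid
  implicit none
  integer, parameter :: n = 1024
  real :: inputs(n), outputs(n)
  integer :: i

  call random_number(inputs)

  ! Each iteration may execute in parallel; the Fortran standard
  ! requires that any procedure referenced here be pure.
  do concurrent (i = 1:n)
    outputs(i) = sigmoid(inputs(i))
  end do

  print *, 'first output: ', outputs(1)
end program batch_inference_sketch
```

Because the compiler is free to run the iterations of “do concurrent” in any order or simultaneously, restricting the loop body to pure procedures (no side effects, no hidden state) is what makes this parallelism safe without OpenMP directives.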
Download the Software
Contributors
Damian Rouson, Zhe Bai, Dan Bonachea, Katherine Rasmussen, David Torres
About Computing Sciences at Berkeley Lab
High performance computing plays a critical role in scientific discovery. Researchers increasingly rely on advances in computer science, mathematics, computational science, data science, and large-scale computing and networking to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab's Computing Sciences Area researches, develops, and deploys new foundations, tools, and technologies to meet these needs and to advance research across a broad range of scientific disciplines.