Improving Training for Scientific Machine Learning

Physics-Informed Neural Networks Failure Modes and Solutions

March 3, 2023

By Carol Pott
Contact: cscomms@lbl.gov

In the world of scientific machine learning (SciML), scientists are beginning to embrace neural networks as a way to accelerate simulations. At the heart of deep learning algorithms, neural networks provide a mechanism to encode complex dependency structures, using many connected layers of nodes to transform data into learned features for a wide range of scientific tasks.

Spanning disciplines from quantum mechanics to health sciences, many successful machine learning (ML) and artificial intelligence (AI) methodologies aim to use neural networks either to complement or to speed up traditional computational models by training on various combinations of experimental and synthetic data. To a scientist, that improvement can mean the difference between a lifetime studying one dataset and a career studying multiple systems in much finer detail.

Despite their popularity, neural networks can be difficult to train and employ on large-scale problems. This is well-known for computer vision and natural language processing problems (more traditional ML application areas), but it is becoming increasingly clear for scientific problems as well. The challenges with training SciML models can be quite different from the challenges associated with training more general ML models.

A recent paper, Characterizing Possible Failure Modes in Physics-Informed Neural Networks, published in the Proceedings of the 2021 Conference on Neural Information Processing Systems by Lawrence Berkeley National Laboratory (Berkeley Lab) researchers Aditi Krishnapriyan and Michael Mahoney and their collaborators Amir Gholami, Shandian Zhe, and Robert M. Kirby, highlights this fact. The paper considered physics-informed neural networks (PINNs), which incorporate domain knowledge in the form of a differential operator as a soft regularization term in training. It demonstrated that, while existing PINN methodologies work for relatively simple tasks, they can easily fail to learn relevant phenomena in somewhat more complex situations.

“We demonstrate that PINN methodologies can easily fail to learn relevant physical phenomena for even slightly more complex problems than very simple toy problems and that existing solutions are developed for very specific needs,” said Michael Mahoney, group lead of the Scientific Data Division’s Machine Learning and Analytics Group.

Loss Landscapes, ML Methodologies, and Promising Solutions

The work analyzed several distinct situations of widespread physical interest, including learning differential equations with convection, reaction, and diffusion operators. It showed that the soft regularization in PINNs, which involves partial differential equation (PDE)-based differential operators, can introduce a number of subtle problems, including making the problem more ill-conditioned. Importantly, these possible failure modes are not due to a lack of expressivity in the neural network architecture; instead, the PINN setup makes the loss landscape hard to optimize.
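To make the "soft regularization" concrete, here is a toy sketch (our illustration, not the paper's code) of a PINN-style loss for a convection equation u_t + beta*u_x = 0. A small tanh network plays the role of the solution, and the PDE residual is estimated with finite differences rather than automatic differentiation to keep the example self-contained; the weighted residual term is exactly the soft penalty that can make the loss landscape hard to optimize.

```python
import numpy as np

def model(params, x, t):
    """A tiny tanh network u(x, t): stand-in for the PINN solution model."""
    W1, b1, w2 = params
    h = np.tanh(W1 @ np.stack([x, t]) + b1[:, None])
    return w2 @ h

def pinn_loss(params, x_d, t_d, u_d, x_c, t_c, beta=1.0, lam=1.0, eps=1e-4):
    # Data-fit term on observed points (e.g. initial/boundary conditions).
    data_loss = np.mean((model(params, x_d, t_d) - u_d) ** 2)
    # Soft PDE penalty on collocation points: residual of u_t + beta*u_x = 0,
    # estimated here with central finite differences for simplicity.
    u_t = (model(params, x_c, t_c + eps) - model(params, x_c, t_c - eps)) / (2 * eps)
    u_x = (model(params, x_c + eps, t_c) - model(params, x_c - eps, t_c)) / (2 * eps)
    pde_loss = np.mean((u_t + beta * u_x) ** 2)
    # lam weights the soft constraint against the data-fit term.
    return data_loss + lam * pde_loss
```

Larger values of beta or lam make the residual term dominate, which is one way the soft-constraint formulation can become ill-conditioned.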

"The key point is not that a good scientist couldn't find a solution to the problem of training a particular PINN model. They certainly could,” said Mahoney. “Instead, the key point is that that solution would not conform well with ML methodologies, and that creates a friction point for scaling SciML tools by both scientists and ML experts.”

The paper discusses two promising solutions to address these failure modes, each based on ML methodologies. In one solution, the team proposes curriculum regularization, where the PINN loss term starts from a simple PDE regularization and becomes progressively more complex as the neural network trains. Another approach is to pose the problem as a sequence-to-sequence, or step-by-step, learning task rather than learning to predict the entire space-time solution at once. Their extensive testing showed that these methods can reduce error by one to two orders of magnitude compared to regular PINN training.
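Both ideas can be sketched in a few lines. The following is our simplified illustration (not the authors' implementation): curriculum regularization is shown as a linear ramp of a PDE coefficient from an easy value to a hard target, and the sequence-to-sequence approach as splitting the time domain into short windows trained one after another; the specific parameter names and values are assumptions for illustration.

```python
import numpy as np

def curriculum_schedule(step, total_steps, beta_easy=1.0, beta_target=30.0):
    """Linearly ramp a PDE coefficient from an easy value to the hard target,
    so early training sees a simpler, better-conditioned problem."""
    frac = min(step / max(total_steps - 1, 1), 1.0)
    return beta_easy + frac * (beta_target - beta_easy)

def time_windows(t_final, n_windows):
    """Split [0, t_final] into consecutive windows for step-by-step training;
    each window's prediction supplies the next window's initial condition."""
    edges = np.linspace(0.0, t_final, n_windows + 1)
    return list(zip(edges[:-1], edges[1:]))
```

In a training loop, one would evaluate `curriculum_schedule(step, total_steps)` each step to set the current PDE coefficient, or iterate over `time_windows(...)` and train the model on one window at a time.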

Further Research Exploring Additional Solutions

In two subsequent papers, currently in review, the team has begun to explore additional solutions, with an eye toward establishing the foundations for SciML training at scale.

In Adaptive Self-Supervision Algorithms for Physics-Informed Neural Networks, Mahoney, Shashank Subramanian, Robert M. Kirby, and Amir Gholami studied the impact of the location of the collocation points on the trainability of these models. The team found that PINN performance can be significantly boosted by changing the location of the collocation points as training proceeds. Specifically, they propose a novel adaptive collocation scheme that progressively shifts collocation points (without increasing their total number) toward areas where the model makes higher errors. They also found that restarting training whenever the optimization stalls leads to better estimates of the prediction error. Training PINNs on these problems can otherwise result in up to 70% prediction error in the solution, especially in regimes with few collocation points. Even in regimes with many collocation points, the adaptive methods perform on par with or slightly better than standard PINN methods.
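The core mechanism can be sketched as residual-weighted resampling. This is our simplified illustration of the general idea, not the paper's algorithm: the total number of collocation points stays fixed, but points are periodically redrawn with probability proportional to the current PDE residual, so they concentrate where the model errs most; the jitter scale is an arbitrary assumption.

```python
import numpy as np

def resample_collocation(points, residuals, rng):
    """Redraw the same number of collocation points, weighted by |residual|,
    concentrating them in regions where the model currently errs most."""
    weights = np.abs(residuals)
    probs = weights / weights.sum()
    idx = rng.choice(len(points), size=len(points), replace=True, p=probs)
    # Small jitter so resampled points are not exact duplicates.
    return points[idx] + rng.normal(scale=1e-2, size=points[idx].shape)
```

Calling this every few hundred optimizer steps (with residuals evaluated at the current points) keeps the collocation budget constant while adapting its placement.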

In Learning Differentiable Solvers for Systems with Hard Constraints, to be published in the Proceedings of the 2023 International Conference on Learning Representations, Mahoney, Krishnapriyan, and collaborator Geoffrey Négiar introduce a practical method to enforce linear PDE constraints on functions defined by neural networks. Building the differentiable PDE-constrained neural network layer required methods from differentiable physics and the application of the implicit function theorem to neural network models. During training, the model learns a family of functions, each of which defines a mapping from PDE parameters to PDE solutions. At inference time, the model finds an optimal linear combination of the functions in the learned family by solving a PDE-constrained optimization problem.
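The "optimal linear combination under a hard constraint" step can be illustrated with a much simpler stand-in. In this sketch (our illustration, not the paper's method, which handles PDE constraints via differentiable layers), the learned family is represented by a matrix of basis-function evaluations, and the coefficients of the combination are required to satisfy a linear constraint A @ c = b exactly, via the KKT system of the equality-constrained least-squares problem.

```python
import numpy as np

def constrained_coefficients(Phi, y, A, b):
    """Least-squares fit of Phi @ c to y subject to A @ c = b exactly.

    Phi: (n_samples, n_basis) evaluations of the learned basis functions.
    A, b: linear constraint that the coefficients must satisfy exactly
          (a stand-in for a hard PDE constraint).
    Solves the KKT system of the equality-constrained least-squares problem.
    """
    n = Phi.shape[1]
    m = A.shape[0]
    K = np.block([[Phi.T @ Phi, A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([Phi.T @ y, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # drop the Lagrange multipliers
```

Unlike a soft penalty, the returned coefficients satisfy the constraint to solver precision rather than approximately, which mirrors the paper's contrast between soft and hard constraint enforcement.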

“Our initial work showed that there were a number of challenges with using the standard ‘physics-informed’ approach of a soft constraint,” said Aditi Krishnapriyan, a faculty scientist with Berkeley Lab and assistant professor at UC Berkeley. “To address this, we developed a method to incorporate this physical information as a much more strict constraint via a layer in the neural network. This took a lot more engineering and methods development, but ultimately it was interesting to see how much better this second approach did: we were able to get lower error and converge much faster to the correct solution.”

There is more work to be done in finding ways to combine domain information into high-quality ML methods, and the team is continuing to search for more solutions. “A fundamental challenge raised by this work is that of developing SciML methods that are principled both from the scientific perspective as well as from the perspective of ML training and testing methodology,” Mahoney said. “Often in ML, a solution is developed and then science is added as an afterthought, or the science solution is conceived and then ML is added as an afterthought. We want to start bringing both to the forefront to optimize the benefit across disciplines.”

About Computing Sciences at Berkeley Lab

High performance computing plays a critical role in scientific discovery. Researchers increasingly rely on advances in computer science, mathematics, computational science, data science, and large-scale computing and networking to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab’s Computing Sciences Area researches, develops, and deploys new foundations, tools, and technologies to meet these needs and to advance research across a broad range of scientific disciplines.

Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 16 Nobel Prizes. Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and the universe. Scientists from around the world rely on the Lab’s facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of California for the U.S. Department of Energy’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.