Physics-Constrained Machine Learning

Partial differential equations (PDEs) are commonly used to describe phenomena in science and engineering, and they are often derived from governing first principles (e.g., conservation of mass or energy). For many real-world problems, analytical solutions to these PDEs cannot be found, so numerical methods (for example, finite element and pseudo-spectral methods) have been developed to approximate their solutions. In complex settings such as turbulence simulations, however, these numerical techniques, which typically refine a candidate solution iteratively until convergence, can be prohibitively expensive. Motivated by this, as well as by the increasing quantities of data available in many scientific and engineering applications, Berkeley Lab scientists are developing machine learning (ML) approaches that approximate the solutions of the underlying PDEs directly and/or work in tandem with numerical solvers. Our scientists are also coupling traditional mechanistic scientific modeling (typically, differential equations) with data-driven ML methodologies (most recently, neural network training) to identify and address failure modes in existing methods and to use those insights to develop improved training protocols for scientific ML problems.
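As a concrete illustration of the general idea, the sketch below trains a small network whose loss penalizes violations of a PDE, in the style of a physics-informed neural network for the 1D heat equation. The equation, architecture, and hyperparameters here are assumptions chosen for illustration, not the Lab's specific methods.

```python
# Minimal physics-informed neural network (PINN) sketch for the 1D heat
# equation u_t = alpha * u_xx. Illustrative only; alpha, the architecture,
# and the training setup are assumptions, not the Lab's actual method.
import math
import torch

torch.manual_seed(0)
alpha = 0.1  # assumed diffusivity

# Small fully connected network mapping (x, t) -> u(x, t)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    # Random collocation points in the space-time domain [0, 1] x [0, 1]
    xt = torch.rand(256, 2, requires_grad=True)
    u = net(xt)

    # First derivatives of u with respect to (x, t) via autograd
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0], grads[:, 1]
    # Second derivative u_xx
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0]

    # PDE residual: the network is penalized for violating the physics
    residual = u_t - alpha * u_xx

    # Initial condition u(x, 0) = sin(pi * x), an assumed example;
    # boundary conditions, omitted for brevity, would add similar terms
    x0 = torch.rand(256, 1)
    u0 = net(torch.cat([x0, torch.zeros_like(x0)], dim=1))
    ic_loss = ((u0 - torch.sin(math.pi * x0)) ** 2).mean()

    loss = (residual ** 2).mean() + ic_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Unlike an iterative numerical solver, the trained network can then be evaluated at any point in the domain in a single forward pass.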

Projects

Lorentz Covariant Neural Network

We introduced approximate Lorentz-equivariant neural network (NN) architectures to address key high-energy physics (HEP) data processing tasks. Encoding symmetries that arise from physical laws, such as the Lorentz symmetry of Einstein's special relativity, can improve the expressiveness, interpretability, and resource efficiency of NNs. Contact: Xiangyang Ju (Ju on the Web)
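One standard way to build Lorentz symmetry into a network is to feed it only Lorentz-invariant features, such as Minkowski inner products of the particle four-vectors; the sketch below illustrates that idea. It is a simplified stand-in, not the approximately equivariant architecture developed in this project, and the particle count and layer sizes are assumptions.

```python
# Sketch of one simple way to build Lorentz symmetry into a network:
# feed only Minkowski inner products of the particle four-vectors to an
# MLP, so the output is exactly Lorentz invariant. Illustrative only.
import torch

def minkowski_products(p):
    """Pairwise Minkowski inner products of four-vectors.

    p: tensor of shape (n_particles, 4) ordered as (E, px, py, pz).
    Returns a flattened (n_particles * n_particles,) feature vector.
    """
    metric = torch.diag(torch.tensor([1.0, -1.0, -1.0, -1.0]))
    return (p @ metric @ p.T).reshape(-1)

n_particles = 4  # assumed toy event size
classifier = torch.nn.Sequential(
    torch.nn.Linear(n_particles * n_particles, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),  # e.g., a signal-vs-background score
)

# Example: random four-momenta for one event (toy input)
p = torch.randn(n_particles, 4)
score = classifier(minkowski_products(p))
```

Because the inputs are invariant under any Lorentz boost or rotation of the event, the classifier's output is guaranteed to be invariant as well.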

Domain-Aware, Physics-Constrained Autonomous Experimentation

The prior probability density functions of Gaussian processes are entirely defined by a covariance (kernel) function and a prior mean function. Chosen correctly, the kernel function can constrain the solution space to contain only functions with certain domain-knowledge-adhering properties. Furthermore, the training itself can be formulated as a constrained optimization problem. gpCAM, our Python software package for autonomous experimentation and Gaussian-process function approximation, is tailored to allow the user to inject physical knowledge via the kernel and the prior mean function. The resulting models are significantly more accurate, have more realistically estimated uncertainties, and can approximate functions from fewer data points. Contact: Marcus Noack
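To illustrate how a kernel can encode domain knowledge, the sketch below uses a periodic kernel so that every function in the prior has a known period. It is written in plain NumPy for self-containment; gpCAM accepts user-defined kernels, but its actual API is not shown here, and the period and data are assumed values.

```python
# Sketch of injecting domain knowledge through a GP kernel: if the target
# function is known to be periodic with period `period`, a periodic kernel
# restricts the prior to periodic functions. Illustrative NumPy only.
import numpy as np

def periodic_kernel(x1, x2, period=1.0, length=0.5, variance=1.0):
    """Exactly periodic covariance: every sample path has period `period`."""
    d = np.abs(x1[:, None] - x2[None, :])
    return variance * np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / length**2)

# Noisy observations of an (assumed) periodic function
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 2.0, size=12)
y_train = np.sin(2 * np.pi * x_train) + 0.05 * rng.standard_normal(12)

# Standard GP posterior mean and variance on a test grid
x_test = np.linspace(0.0, 2.0, 200)
K = periodic_kernel(x_train, x_train) + 1e-4 * np.eye(12)  # jitter for stability
K_s = periodic_kernel(x_test, x_train)
mean = K_s @ np.linalg.solve(K, y_train)
var = periodic_kernel(x_test, x_test).diagonal() - np.einsum(
    "ij,ji->i", K_s, np.linalg.solve(K, K_s.T)
)
```

Because the periodicity is built into the prior, the posterior extrapolates periodically even far from the data, which is the sense in which the kernel constrains the solution space.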

The Chemical Universe Through the Eyes of Generative Adversarial Neural Networks

This project is developing generative machine learning models that can discover new scientific knowledge about molecular interactions and structure-function relationships in the chemical sciences. The aim is to create a deep learning network that predicts properties from structural information but can also tackle the “inverse problem,” that is, deducing structural information from properties. To demonstrate the power of the neural network, we focus on bond breaking in mass spectrometry, combining experimental data with HPC computational chemistry data. Funded by a Laboratory Directed Research and Development (LDRD) grant. Contact: Bert de Jong (de Jong on the Web)
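As a rough illustration of the conditional-GAN idea behind such an inverse problem, the sketch below pairs a generator that proposes structure descriptors given target properties with a discriminator that judges (structure, property) pairs. All dimensions, names, and the descriptor representation are assumptions for illustration, not the project's actual model.

```python
# Minimal conditional-GAN sketch for the "inverse problem": a generator
# proposes structure descriptors conditioned on target properties, while
# a discriminator judges (structure, property) pairs. Sizes are assumed.
import torch

NOISE_DIM, PROP_DIM, STRUCT_DIM = 16, 4, 32  # assumed sizes

generator = torch.nn.Sequential(
    torch.nn.Linear(NOISE_DIM + PROP_DIM, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, STRUCT_DIM),
)
discriminator = torch.nn.Sequential(
    torch.nn.Linear(STRUCT_DIM + PROP_DIM, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),  # real-vs-generated logit
)

bce = torch.nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_structs, props):
    """One GAN update from a batch of real descriptors and their properties."""
    batch = props.shape[0]
    noise = torch.randn(batch, NOISE_DIM)
    fake = generator(torch.cat([noise, props], dim=1))

    # Discriminator: label real pairs 1, generated pairs 0
    d_real = discriminator(torch.cat([real_structs, props], dim=1))
    d_fake = discriminator(torch.cat([fake.detach(), props], dim=1))
    d_loss = bce(d_real, torch.ones(batch, 1)) + bce(d_fake, torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to fool the discriminator
    g_loss = bce(discriminator(torch.cat([fake, props], dim=1)), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Once trained, sampling the generator at a fixed property vector yields candidate structures consistent with those properties, which is the inverse direction described above.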