On February 8–10, 2022, the CSA held its third annual Postdoc Symposium virtually at Berkeley Lab, where 12 postdoctoral speakers working at the Lab presented their research to an audience of peers, mentors, and coworkers. View their individual presentations below.

“Towards Understanding I/O Behavior with Interactive Exploration”

Scientific Data Management Group

Abstract: Using a supercomputer’s available storage resources efficiently is a complex and tedious challenge due to interdependencies among multiple layers of I/O software. Although profiling tools can collect traces to help understand I/O performance behavior, there are significant gaps between analyzing the collected traces and applying the performance tuning options offered by the various layers of I/O software. To fill these gaps between metric collection, I/O bottleneck detection, and tuning, we developed DXT Explorer, an interactive web-based log analysis tool that visualizes Darshan DXT traces. It adds an interactive component to Darshan trace analysis that helps researchers, developers, and end users visually inspect their applications’ I/O behavior, zoom in on potential performance bottlenecks, and pinpoint where I/O problems occur so the application can be guided toward suitable optimization techniques.

Center for Computational Sciences and Engineering 

Abstract: In this talk we introduce FunFact, a Python package that enables flexible and concise tensor algebraic expressions. FunFact offers a rich syntax based on a hybrid mixture of Einstein-like notation and indexless operations designed to describe complex tensor expressions. It provides users with an intrinsically powerful tool to compute functional factorizations of algebraic tensors. Here, a functional factorization is understood as a generalization of well-known linear tensor factorizations such as CP and Tucker decompositions. Because of their increased generality, functional factorizations can yield more compact representations of tensorial data compared to what is possible within existing linear frameworks. An exciting example is shown in the form of radial basis function (RBF) approximations. We further illustrate the use and flexibility of FunFact with example applications for image compression, neural network compression and quantum circuit synthesis. FunFact is GPU- and parallelization-ready thanks to modern numerical linear algebra backends such as JAX/TensorFlow and PyTorch.
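
FunFact’s actual API is not reproduced here; as a plain-NumPy illustration of what a functional factorization means, the sketch below (all names hypothetical) fits a small RBF-style model M[i,j] ≈ Σ_r exp(−(U[i,r] − V[j,r])²) by gradient descent, replacing the multilinear product of a classical low-rank factorization with a nonlinear generator:

```python
import numpy as np

def rbf_factorize(M, rank, lr=0.002, steps=500, seed=0):
    """Fit M[i,j] ~ sum_r exp(-(U[i,r]-V[j,r])**2) by gradient descent.

    A toy 'functional factorization': the bilinear inner product of a
    low-rank model is replaced by a nonlinear (RBF-like) generator.
    """
    rng = np.random.default_rng(seed)
    I, J = M.shape
    U = rng.standard_normal((I, rank))
    V = rng.standard_normal((J, rank))
    losses = []
    for _ in range(steps):
        D = U[:, None, :] - V[None, :, :]         # (I, J, R) pairwise differences
        K = np.exp(-D**2)                         # RBF generator per rank channel
        E = K.sum(axis=2) - M                     # reconstruction residual
        losses.append(float((E**2).sum()))
        G = 2.0 * E[:, :, None] * K * (-2.0 * D)  # dLoss/dD via the chain rule
        U -= lr * G.sum(axis=1)                   # dD/dU = +1
        V += lr * G.sum(axis=0)                   # dD/dV = -1
    return U, V, losses

# Synthesize a target the model class can represent, then fit it.
rng = np.random.default_rng(1)
Ut, Vt = rng.standard_normal((8, 2)), rng.standard_normal((6, 2))
M = np.exp(-(Ut[:, None, :] - Vt[None, :, :])**2).sum(axis=2)
U, V, losses = rbf_factorize(M, rank=2)
```

In FunFact itself, such models are written as concise tensor expressions and differentiated automatically by the JAX/TensorFlow or PyTorch backend rather than by hand as above.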

“Parametric Amplification of a Quantum Transducer”

Applied Computing for Scientific Discovery

Abstract: Quantum transducer technologies that convert between stationary microwave qubits and flying optical qubits are the backbone infrastructure and enabler for the Quantum Internet. Among existing platforms, optomechanical transduction is a mature technology with demonstrated high conversion efficiency between microwave and optical photons. Current demonstrations are based on cavities with constant driving fields to pump photons and amplify the optomechanical coupling strength. Given the advances in pulsed control for superconducting circuits, we propose pulsed driving fields to enhance the conversion efficiency. We present a theoretical framework for time-dependent control of the driving lasers based on the input-output formalism of quantum optics and show how pulsed control schemes can enhance the conversion efficiency. Our results pave the way for more advanced optomechanical control protocols.

“Enabling Secure Learning for Clean Energy Systems”

Integrated Data Frameworks Group

Abstract: The carbon-neutrality goal calls for practical solutions for revolutionizing our energy systems, ranging from power grids to electric vehicles. The high level of system complexity and the increasing uncertainty introduced by renewables bring unprecedented challenges to designing operation and control schemes for clean energy systems. In this work, we propose to safely integrate machine learning into a set of decision-making problems, such as voltage regulation, building energy management, and EV charging. To satisfy physical constraints and meet the reliability needs of power grids, we show a set of techniques for designing safe learning methods that come with performance guarantees for the resulting algorithms. We demonstrate examples on both real-world data and simulated test cases.
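
The abstract does not specify the authors’ construction, but one common way to make a learned controller respect hard physical limits is a safety layer that projects the policy’s raw output onto the feasible set before it is applied. The toy sketch below (all names hypothetical) projects an action onto a halfspace constraint a·x ≤ b in closed form, then clips to box limits:

```python
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the halfspace {z : a @ z <= b} (closed form)."""
    violation = a @ x - b
    if violation <= 0.0:
        return x                               # already feasible, leave as-is
    return x - (violation / (a @ a)) * a       # move along a to the boundary

def safe_action(raw_action, a, b, lo, hi):
    """A toy safety layer: halfspace projection followed by box clipping."""
    x = project_halfspace(np.asarray(raw_action, dtype=float), a, b)
    return np.clip(x, lo, hi)

# Example: a 2-D "charging power" action limited by a @ x <= 1 and a box.
a = np.array([1.0, 1.0])
act = safe_action([2.0, 1.0], a, b=1.0, lo=-1.0, hi=1.0)
```

Because the projection is a simple differentiable operation, a layer like this can sit at the end of a learned policy without interfering with gradient-based training.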

“HYPPO: A surrogate-based, UQ-informed, and multilevel parallelism HPO tool”

Center for Computational Sciences and Engineering (CCSE)

Abstract: We present a new software package, HYPPO, that enables automatic tuning of the hyperparameters of various deep learning (DL) models. Unlike other hyperparameter optimization (HPO) methods, HYPPO uses adaptive surrogate models and directly accounts for uncertainty in model predictions to find accurate and reliable models that make robust predictions. Using asynchronous nested parallelism, we significantly alleviate the computational burden of training complex architectures and quantifying the uncertainty.
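
HYPPO’s own algorithm is more sophisticated, but the core idea of uncertainty-aware surrogate-based HPO can be sketched in a few lines (all names hypothetical): fit a tiny Gaussian-process surrogate to the observed (hyperparameter, loss) pairs and pick the next candidate by minimizing a lower confidence bound, so that both the predicted mean and the predictive uncertainty drive the search:

```python
import numpy as np

def gp_posterior(X, y, Xs, length=0.2, noise=1e-6):
    """Posterior mean/std of a zero-mean GP with an RBF kernel (1-D inputs)."""
    k = lambda A, B: np.exp(-(A[:, None] - B[None, :])**2 / (2 * length**2))
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)
    return mean, np.sqrt(np.maximum(var, 0.0))

def surrogate_hpo(objective, iters=10):
    """Minimize objective on [0, 1] with a GP surrogate and an LCB rule."""
    grid = np.linspace(0.0, 1.0, 101)
    X = np.array([0.0, 0.5, 1.0])                    # small initial design
    y = np.array([objective(x) for x in X])
    for _ in range(iters):
        mean, std = gp_posterior(X, y, grid)
        x_next = grid[np.argmin(mean - 2.0 * std)]   # uncertainty-aware pick
        X = np.append(X, x_next)
        y = np.append(y, objective(x_next))
    best = np.argmin(y)
    return X[best], y[best]

# Toy "hyperparameter" objective with its minimum at 0.3.
best_x, best_y = surrogate_hpo(lambda x: (x - 0.3)**2)
```

In HYPPO the expensive evaluations (training DL models) are additionally dispatched with asynchronous nested parallelism, which this serial sketch omits.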

“Performance Portability in Materials Simulations”

NERSC

Abstract: Heterogeneous computing poses new challenges to HPC application developers who wish to remain productive while targeting the diverse architectures of current and upcoming HPC systems. While the evolution of programming models and tools is lowering the barrier to achieving performance on a given architecture, retaining performance portability across a variety of novel architectures has become ever more challenging. Using linear algebra routines extracted from the WEST code, a massively parallel implementation of many-body perturbation theory, we examine the performance and portability of several programming models on CPU and GPU platforms: OpenACC and CUDA Fortran directives, standard Fortran (e.g., do concurrent, dot_product, and matmul), and vendor-optimized BLAS routines.

“Adaptive Variational Quantum Simulation”

Applied Computing for Scientific Discovery

Abstract: We propose a general-purpose, self-adaptive approach to constructing variational wavefunction ansätze for highly accurate quantum dynamics and imaginary-time simulations based on McLachlan’s variational principle. The key idea is to dynamically expand the variational ansatz along the time-evolution path such that the “McLachlan distance”, a measure of the simulation accuracy, remains below a set threshold. We apply this adaptive variational quantum imaginary time evolution (AVQITE) approach to prepare ground states of molecules, where it yields compact variational ansätze and ground-state energies within chemical accuracy. The adaptive quantum circuits that prepare the time-evolved state are much shallower than those obtained from first-order Trotterization and contain up to two orders of magnitude fewer CNOT gate operations. We envision that the adaptive framework will enable a wide range of dynamical and ground-state simulations of quantum many-body systems on near-term quantum computing devices.
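
The adaptive ansatz construction itself is beyond a short snippet, but the principle that imaginary-time evolution exploits — normalized e^(−Hτ)|ψ⟩ converges to the ground state as τ grows — can be checked directly on a toy two-level Hamiltonian (a generic illustration, not the AVQITE circuits from the talk):

```python
import numpy as np

# Toy Hamiltonian H = Z + 0.5 X in the Pauli basis.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def imaginary_time_evolve(H, psi0, tau):
    """Return the normalized state e^{-H tau} |psi0> / ||e^{-H tau} |psi0>||."""
    evals, evecs = np.linalg.eigh(H)
    propagator = evecs @ np.diag(np.exp(-evals * tau)) @ evecs.T
    psi = propagator @ psi0
    return psi / np.linalg.norm(psi)

psi0 = np.array([1.0, 1.0]) / np.sqrt(2.0)  # any start with ground-state overlap
psi = imaginary_time_evolve(H, psi0, tau=10.0)
energy = psi @ H @ psi                      # variational energy <psi|H|psi>
ground = np.linalg.eigh(H)[0][0]            # exact ground-state energy
```

On a quantum device this nonunitary evolution cannot be applied directly, which is why variational approaches such as AVQITE emulate it with parameterized unitary circuits.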

“3D Deep Learning Models for Molecular Property Prediction”

Applied Computing for Computational Research Group

Abstract: Deep learning methods provide a novel way to establish a correlation between two quantities. In this context, computer vision techniques like 3D convolutional neural networks (3D-CNNs) are a natural choice for associating a molecular property with its structure, given the inherent three-dimensional nature of a molecule. However, traditional 3D input data structures are intrinsically sparse, which tends to induce instabilities during the learning process and in turn may lead to under-fitted results. To address this deficiency, we propose using quantum-chemically derived molecular topological features, namely the Localized Orbital Locator (LOL) and the Electron Localization Function (ELF), as molecular descriptors, which provide a relatively denser input representation in three-dimensional space. Such topological features give a detailed picture of the atomic and electronic configuration and the inter-atomic interactions in the molecule, and hence are ideal for predicting properties that depend strongly on the physical or electronic structure of the molecule. Here, we demonstrate the efficacy of our proposed model by applying it to the task of predicting atomization energies for the QM9-G4MP2 dataset, which contains ~134k molecules. Furthermore, we incorporate the Δ-ML approach into our model, which enables us to reach beyond benchmark accuracy levels (~1.0 kJ mol−1). As a result, we consistently obtain MAEs on the order of 0.1 kcal mol−1 (~0.42 kJ mol−1) versus G4(MP2) theory using relatively modest models, which could potentially be improved further in a systematic manner with additional compute resources.
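
The Δ-ML idea is to learn only the small, smooth correction between a cheap level of theory and an expensive one, then add the learned correction back to the cheap prediction. The toy sketch below (synthetic data, all names hypothetical; the talk’s model is a 3D-CNN, not a polynomial) shows the pattern:

```python
import numpy as np

# Synthetic "molecules": a 1-D descriptor x, a cheap energy, and a true
# (expensive) energy that differs from the cheap one by a smooth correction.
x = np.linspace(-1.0, 1.0, 40)
e_cheap = x**2                       # low-level theory (fast to evaluate)
e_true = e_cheap + 0.1 * x + 0.05    # high-level theory = cheap + correction

# Delta-ML: model only the difference, not the full energy surface.
delta = e_true - e_cheap
coeffs = np.polyfit(x, delta, deg=1)          # any regressor would do here

def predict_high(x_new, e_cheap_new):
    """High-level estimate = cheap estimate + learned correction."""
    return e_cheap_new + np.polyval(coeffs, x_new)

pred = predict_high(x, e_cheap)
```

Because the correction is typically much smaller and smoother than the target quantity itself, the learning problem becomes easier and the required model capacity drops.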

“Particle-in-Cell Simulations of Relativistic Magnetic Reconnection”

Center for Computational Sciences and Engineering

Abstract: Magnetic reconnection is a microphysical plasma process by which anti-parallel magnetic field lines quickly flow towards one another and reconnect, converting stored electromagnetic energy into kinetic energy. Using the particle-in-cell code WarpX, I perform detailed, first-principles simulations of relativistic reconnection. These calculations zoom in on a pair of current sheets, representative of those found in extreme astrophysical settings like gamma-ray bursts, magnetars, and stellar coronae. I use these simulations to quantify the properties of reconnection and to compare the accuracy and performance of finite-difference and higher-order pseudo-spectral methods. These results will then be used to refine models of reconnection used in larger-scale simulations, such as those of pulsar magnetospheres.


“Exascale Modeling of Electromagnetics with Applications to Microelectronics and Particle Accelerators”

Center for Computational Sciences and Engineering

“GPTune: Non-smooth and Mixed Variables in Surrogate Models”

Scalable Solvers Group

Abstract: In this talk, we present recent advances and research in black-box optimization using Gaussian process surrogate models.

“Faster Algorithms for Tensor Ring Decomposition”

Scalable Solvers Group

Abstract: In this talk, I will discuss recent work on a sampling-based method for computing the tensor ring (TR) decomposition of a data tensor. The method uses leverage-score-sampled alternating least squares (ALS) to fit the TR cores in an iterative fashion. By taking advantage of the special structure of TR tensors, we can efficiently estimate the leverage scores and obtain a method whose complexity is sublinear in the number of input tensor entries. We provide high-probability relative-error guarantees for the sampled least squares problems. We will also show results from numerical experiments and discuss ways forward for achieving further computational complexity improvements for ALS-based TR decomposition.
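
For a single least-squares problem min_x ||Ax − b||, the row leverage scores are the squared row norms of an orthonormal basis for A’s column space, and sampling rows proportionally to them yields a much smaller problem with provable relative-error bounds. The TR-specific structure that makes the scores cheap to estimate is not reproduced here; the sketch below (all names hypothetical) computes exact scores via a thin QR:

```python
import numpy as np

def leverage_scores(A):
    """Exact leverage scores: squared row norms of Q from a thin QR of A."""
    Q, _ = np.linalg.qr(A)
    return (Q**2).sum(axis=1)

def sampled_lstsq(A, b, n_samples, seed=0):
    """Solve a leverage-score-sampled, reweighted least-squares problem."""
    rng = np.random.default_rng(seed)
    p = leverage_scores(A)
    p = p / p.sum()                            # sampling distribution over rows
    idx = rng.choice(A.shape[0], size=n_samples, p=p)
    w = 1.0 / np.sqrt(n_samples * p[idx])      # importance-sampling reweighting
    x, *_ = np.linalg.lstsq(w[:, None] * A[idx], w * b[idx], rcond=None)
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((200, 5))
b = rng.standard_normal(200)
scores = leverage_scores(A)
x_s = sampled_lstsq(A, b, n_samples=80)
```

The scores of a full-column-rank matrix sum to its number of columns, so only a few rows typically carry large scores; the TR method’s contribution is estimating them without ever forming the huge ALS design matrices explicitly.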

“Mosaic Flow: Transferable Flow Prediction”

Center for Computational Sciences and Engineering

Abstract: In this talk, we present tensor equation variants of second-order Møller–Plesset perturbation theory (MP2) in quantum chemistry for computing the electron correlation energy. Specifically, using the structure of Kronecker products, we rewrite the target linear system as a Sylvester tensor equation and develop solvers based on the structure of different chemical formulations. In particular, we develop a factored alternating direction implicit (fADI) method that exploits data sparsity in the canonical orbital representation, and a sparsity-enforcing Krylov subspace method that exploits sparsity in the localized orbital representation. We provide a complexity analysis of our tensor equation solvers, and numerical results show that these methods are faster than traditional linear system solvers on realistic test cases from chemical experiments.

Last edited: March 10, 2025