
Graph Neural Networks Open New Doors in Particle Physics Research

Q&A with NERSC's Steve Farrell looks at the past, present, and future of machine learning in HEP

August 11, 2020

By Kathy Kincade

Contact: cscomms@lbl.gov

Steve Farrell, physicist and machine learning specialist at NERSC.

Next-generation machine learning methods are gaining attention in high energy physics (HEP) as they demonstrate more efficient approaches to particle physics research at experimental facilities such as the Large Hadron Collider (LHC) at CERN and the SLAC National Accelerator Laboratory at Stanford University.

Surprisingly, machine learning is not new to the HEP research community. Long before the recent excitement around deep neural networks, traditional machine learning methods were already being used in the 1990s in the Tevatron experiments at Fermilab and the Large Electron-Positron Collider experiments at CERN. “They used shallow neural networks and decision trees in the trigger and particle identification applications,” notes Steve Farrell, a physicist and machine learning specialist at the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy user facility based at Lawrence Berkeley National Laboratory.

In this Q&A, Farrell discusses how high-performance data analysis tools are helping the HEP research community find more efficient ways to sift through, reconstruct, and analyze the increasing amounts of complex detector data available to them as they study the subatomic particles that constitute matter and radiation.

What makes machine learning a good fit for HEP research?


One of the things driving these efforts is that HEP data is getting larger. At colliders like the LHC, for example, the sizes and collection rates of collision events, as well as the overall dataset volume, are growing. Understanding this data requires a corresponding increase in simulation data, which is expensive to produce. The algorithms used to reconstruct and analyze the data also struggle with the growing complexity. Taken together, these effects mean that the community is running into problems with computing resources. Machine learning solutions present new ways to attack these computational problems, and in some applications there is hope that they may replace or supplement hand-written code and run more efficiently on modern computing hardware.

We have complex workflows in HEP consisting of many small pieces related to the simulation, reconstruction, and final analysis of collision events. It turns out that machine learning solutions can be incorporated into almost any part of the workflow. While traditional algorithms for things like reconstructing and identifying particles are designed using our best knowledge of physics and our detectors, we often find that machine learning models that can automatically learn from the data can be more powerful.

Are some of the newer machine learning techniques finding traction in HEP research?


Many people involved in the HEP experiments are interested in cutting-edge data-analysis methods and applications, so there’s been a lot of excitement and rapid growth in exploring deep learning. Dedicated groups have been formed within the HEP experiments, such as the Inter-experimental Machine Learning working group at CERN, and we have started to see a number of machine learning papers and posters at HEP conferences as well.

Visualization of a graph neural network finding particle tracks in a simulated collision event from the TrackML dataset. The model takes an input graph of connected hits in the detector and iteratively figures out which connections represent hits coming from the same particle while pruning away spurious connections. (Credit: Exa.TrkX Project)

We do see that deep learning methods are working their way into parts of the HEP workflows. The early use cases include things like particle identification, where you have low-level detector features and need to extract the high-level physics information that you can subsequently analyze. The research community seems comfortable having these advanced methods at this level of the data processing pipeline, where you can still understand or mitigate systematic effects introduced by the methods.
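To make the particle-identification idea concrete, here is a minimal sketch, written in PyTorch, of the kind of classifier described above: a small neural network that maps a flat vector of low-level detector features (imagined here as calorimeter cell energies) to particle-type probabilities. This is a generic illustration rather than code from any experiment; the input size, class labels, and network architecture are all assumptions made for the example.

import torch
import torch.nn as nn

N_CELLS = 64     # assumed number of low-level detector inputs (e.g., calorimeter cells)
N_CLASSES = 3    # assumed particle classes, e.g., electron / photon / charged pion

class ParticleIDNet(nn.Module):
    """Small feed-forward classifier from low-level detector features to particle types."""
    def __init__(self, n_inputs=N_CELLS, n_classes=N_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),   # raw logits; softmax is applied inside the loss
        )

    def forward(self, x):
        return self.net(x)

# Toy usage with random stand-ins for real (or simulated) detector data.
model = ParticleIDNet()
features = torch.rand(32, N_CELLS)             # a batch of 32 events
labels = torch.randint(0, N_CLASSES, (32,))    # placeholder truth labels
loss = nn.CrossEntropyLoss()(model(features), labels)
loss.backward()
print(f"toy training loss: {loss.item():.3f}")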

What has been slower, though, has been getting these methods into production in the simulation and final analysis steps, and getting the results (real physics results, real measurements) published in peer-reviewed journals. The HEP research community has rigorous requirements when it comes to getting a physics analysis published; you have to go through many steps of approval, and there are many chances for people in a large collaboration to raise criticisms and make you go back and do more work. So it is taking a while to see journal papers published that have measurements based on deep neural networks. But we are making progress.

What is the focus of your current research?


At present, I am involved in a particle tracking project known as Exa.TrkX, a collaboration of data scientists and computational physicists from the ATLAS, CMS, and DUNE experiments that is developing deep learning methods aimed at reconstructing millions of particle trajectories per second from petabytes of raw data produced by the next generation of detectors.

As part of this project, we started looking at applying models known as graph neural networks to the problem of particle tracking and found that this approach has a number of advantages over other machine learning approaches. Particle tracking systems in HEP experiments have extremely high-granularity sensors that record the positions of charged particles as they traverse the detector. Tracking algorithms must take the resulting point clouds of measurements and connect the dots to reconstruct the many particle tracks in each collision. While we initially tried turning this data into images to apply computer vision methods, we ran into limitations because of the high dimensionality and sparsity of the data. But we found that if you treat the data natively as point clouds and use methods designed for such data structures, like graph neural networks, you can take full advantage of the tracking detector and have much more success.
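Below is a minimal sketch of this edge-classification idea in plain PyTorch; the actual Exa.TrkX pipeline is considerably more elaborate. Hits become graph nodes carrying coordinate features, candidate edges connect pairs of hits, and the model scores each edge by how likely the two hits belong to the same track. The feature layout, network sizes, and the tiny example graph are all illustrative assumptions.

import torch
import torch.nn as nn

class EdgeClassifierGNN(nn.Module):
    """One round of message passing over hits, followed by per-edge scoring."""
    def __init__(self, node_dim=3, hidden=32):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(node_dim, hidden), nn.ReLU())
        self.update = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
        self.edge_net = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x, edge_index):
        src, dst = edge_index                  # edge_index: (2, n_edges) hit indices
        h = self.encode(x)                     # per-hit embeddings
        # Aggregate messages from connected hits along the candidate edges.
        messages = torch.zeros_like(h).index_add_(0, dst, h[src])
        h = self.update(torch.cat([h, messages], dim=1))
        # Score each candidate edge; sigmoid gives a "same particle" probability.
        scores = self.edge_net(torch.cat([h[src], h[dst]], dim=1))
        return torch.sigmoid(scores).squeeze(-1)

# Toy usage: five hits and four candidate edges with made-up coordinates.
hits = torch.rand(5, 3)                              # (r, phi, z)-like features per hit
edges = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])   # candidate hit-pair connections
probs = EdgeClassifierGNN()(hits, edges)
print(probs)   # edges above a chosen threshold are kept as track segments

In a full pipeline, the kept edges would then be grouped into connected chains of hits, with each chain forming a track candidate that is fitted and passed downstream.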

We now have a team at Berkeley Lab working on this that includes Paolo Calafiura, Sean Conlon, Xiangyang Ju, Daniel Murnane, and Nick Choma, and we’ve made a lot of progress in scaling up these graph neural nets to work with really complex data. At this point, we are looking at fairly realistic data and showing that we can get promising results in finding tracks in this data, and we are seeing a lot of interest in these kinds of methods from other research groups as well. For example, at the recent “Connecting the Dots” workshop, at least five presentations involved work done by the Exa.TrkX collaboration, and other folks at the event were talking about graph neural network applications for particle tracking and applying this approach to their experiment data.

What do you see as the long-term impact of this research for the HEP community?


Our end goal is to come up with a reconstruction algorithm based on graph neural networks that can do a better job of finding tracks or do it faster than the traditional solutions so it can replace these solutions in the reconstruction pipeline. By improving this part of the workflow we can enhance the capabilities of the experiments to do the full physics analysis, potentially improving the reach of searches for new physics like supersymmetry and new heavy particles.

It is all very exciting at this point, and we are hopeful that we will begin to see these things evaluated in realistic scenarios to see whether they can be practical solutions for particle tracking in the future. We’d like these approaches to be demonstrated not only for the current generation of experiments and the upgrades of ATLAS and CMS, but also for future experiments like CERN’s proposed Future Circular Collider. As we progress to the next generation of HEP experiments, the data challenges will increase by yet another order of magnitude. There will continue to be larger and denser datasets, so it is important to be thinking about solutions that will work now and also down the line.


About Computing Sciences at Berkeley Lab

High performance computing plays a critical role in scientific discovery. Researchers increasingly rely on advances in computer science, mathematics, computational science, data science, and large-scale computing and networking to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab’s Computing Sciences Area researches, develops, and deploys new foundations, tools, and technologies to meet these needs and to advance research across a broad range of scientific disciplines.