Berkeley Lab Scientists Create Machine Learning Pipeline for Interpreting Large Tomography Datasets
January 25, 2023
By Keri Troutman
Advances in biological imaging have given scientists unprecedented datasets at extremely high resolution, yet data interpretation tools are struggling to keep up. This is particularly evident in cryo-electron tomography (cryo-ET), where samples exhibit inherently low contrast because of the limited electron dose that can be applied during imaging before radiation damage occurs.
The segmentation of these cell tomograms remains a challenging task, one that is most accurately performed by human experts, and only with a substantial investment of time. Because manual segmentation is not feasible for large datasets, a group of Berkeley Lab scientists recently developed and tested several machine learning techniques, organized into a pipeline, to segment and identify cryo-ET cell membrane structures. A paper describing their approach, “A machine learning pipeline for membrane segmentation of cryo-electron tomograms,” was published this month in the Journal of Computational Science.
“One of the main difficulties with these types of images is that they’re very noisy,” said Chao Yang, a senior scientist in the Applied Mathematics and Computational Research Division at Lawrence Berkeley National Laboratory (Berkeley Lab) and one of the paper’s authors. “It’s the main challenge when you are trying to detect some type of structure or segment the images — it could take one scientist several months to get one tomogram all segmented correctly.”
Although a number of automated segmentation algorithms and tools have been developed in the last few decades for high contrast medical 3D imaging, most of them perform poorly on cryo-ET because the datasets have a low signal-to-noise ratio as well as missing-wedge artifacts caused by the limited sample tilt range that is accessible during imaging. Given the complexity of the segmentation task and the inherent challenge in obtaining high-quality tomograms, the researchers knew that it was unlikely a single image processing or machine learning technique would produce satisfactory results, so they set out to develop an image analysis and segmentation pipeline that combined various methods.
The project was an LDRD-funded (Laboratory Directed Research and Development Program) collaboration with scientists Nick Sauter and Karen Davies from Berkeley Lab’s Molecular Biophysics and Integrated Bioimaging (MBIB) division. The team used the Cori supercomputer at the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab to test their methods and further refine the pipeline approach.
“There are a bunch of existing machine learning algorithms out there already, mostly related to medical imaging, but when you try to apply them to cryo-ET they just don’t work, mostly because of the low signal-to-noise ratio,” said Talita Perciano, a research scientist in Berkeley Lab’s Scientific Data Division and another paper co-author. “Recently there’s been a lot of development in using convolutional neural networks (CNNs) to solve these sorts of image segmentation problems, so we looked at those studies and tried them out and found we could get pretty good segmentation, but not perfect. So we knew we had to incorporate other machine learning methods as well.”
A Multi-pronged Approach
The research team knew that a human scientist can do a much better job than a computer program at segmenting and extracting membrane structures because the scientist has prior knowledge about the biological object to be segmented, so they incorporated that idea into their process via several machine learning methods:
- They used U-Net, a popular CNN-based segmentation tool, to identify membrane structures matching the geometric motifs in training data drawn from tomogram slices, then combined multiple additional machine learning techniques to enhance the CNN-based segmentation results.
- They used reinforcement learning algorithms to connect multiple segmented pieces that belong to the same membrane structure.
- They applied classification algorithms to separate different membrane structures and place fragments of the same structure into the same group.
- They employed parametric and non-parametric fitting algorithms to produce a smooth and continuous surface representation of membranes.
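The four stages above can be sketched, in greatly simplified form, with stand-in algorithms: thresholding in place of U-Net, gap-bridging in place of reinforcement learning, connected-component labeling for classification, and a moving average in place of the paper's surface fitting. All function names and parameters here are illustrative, not from the paper.

```python
def segment(image, threshold=0.5):
    """Stage 1 stand-in: per-pixel segmentation (a trained U-Net would go here)."""
    return [[1 if v > threshold else 0 for v in row] for row in image]

def link_fragments(mask, max_gap=2):
    """Stage 2 stand-in: bridge small gaps along each row, loosely analogous
    to connecting segmented pieces that belong to the same membrane."""
    out = [row[:] for row in mask]
    for row in out:
        cols = [i for i, v in enumerate(row) if v]
        for a, b in zip(cols, cols[1:]):
            if 1 < b - a <= max_gap + 1:       # small gap: fill it in
                for i in range(a + 1, b):
                    row[i] = 1
    return out

def group_structures(mask):
    """Stage 3 stand-in: label 4-connected components, so fragments of the
    same structure end up in the same group."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                count += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] and not labels[cy][cx]:
                        labels[cy][cx] = count
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return labels, count

def smooth_contour(ys, window=3):
    """Stage 4 stand-in: moving-average smoothing of a traced contour
    (the paper uses parametric/non-parametric surface fitting instead)."""
    half = window // 2
    return [sum(ys[max(0, i - half):i + half + 1]) / len(ys[max(0, i - half):i + half + 1])
            for i in range(len(ys))]

# Usage on a tiny synthetic "tomogram slice":
img = [[0.9, 0.9, 0.1, 0.9, 0.9],
       [0.1, 0.1, 0.1, 0.1, 0.1],
       [0.9, 0.1, 0.1, 0.1, 0.9]]
labels, n = group_structures(link_fragments(segment(img)))
```

The point of the sketch is the staging, not the algorithms: each stage repairs a failure mode left by the previous one, which is the core idea of the published pipeline.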
“The neural networks leave gaps in certain areas, so we used reinforcement learning to sort of trace out what the contour might look like and then combined that with Gaussian process based machine learning techniques to smooth out the surface a little bit,” said Chao. “With this system in place that we’ve developed, we’re looking at going from this process taking months with real human biologists to it taking weeks, perhaps even just days.”
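The Gaussian-process smoothing Chao describes can be illustrated with a minimal GP regression in NumPy: fit noisy points along a traced contour and read off the smooth posterior mean. The RBF kernel, length scale, and noise level below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.5):
    """Squared-exponential (RBF) kernel between two sets of 1-D points."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_smooth(x_train, y_train, x_query, noise=0.1):
    """GP posterior mean at x_query given noisy observations:
    a smooth curve through jagged segmentation output."""
    K = rbf_kernel(x_train, x_train) + noise ** 2 * np.eye(len(x_train))
    K_star = rbf_kernel(x_query, x_train)
    return K_star @ np.linalg.solve(K, y_train)

# Usage: smooth a synthetic, jagged membrane trace.
x = np.linspace(0.0, 2 * np.pi, 30)
y = np.sin(x) + 0.15 * np.random.default_rng(0).normal(size=30)
y_smooth = gp_smooth(x, y, x)
```

Because the noise term regularizes the fit, the posterior mean does not chase every jagged sample, which is exactly the "smooth out the surface" behavior described in the quote.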
The biological impact of this type of machine learning system is a much more expansive insight into how various cell structures support the cell’s function. “If it were just one cell, we could do this by hand, but the real potential is to view the same cellular structure over the entire life cycle of the organism, and how it changes under different environmental conditions and external stimuli,” said Sauter, the MBIB senior scientist who co-authored the paper. “With large data, the new machine learning techniques can help discover how the diverse ensemble of structures support function in the cell.”
By combining various techniques, the researchers were able to develop a system that takes less time and delivers a better result, said Perciano. The approach worked well for segmenting membrane surfaces in two large biological datasets, and because the pipeline is quite flexible, it should be applicable to other datasets with minimal modification.
“Cryo-EM and tomography have exploded in the last decade, so scientists are getting a lot of structures, but these structures need to be interpreted,” said Chao. “So now the challenge is doing just that, and if machine learning tools can help speed up that process, it will have a big impact on biological research.”
NERSC is a U.S. Department of Energy Office of Science user facility.
About Computing Sciences at Berkeley Lab
High performance computing plays a critical role in scientific discovery. Researchers increasingly rely on advances in computer science, mathematics, computational science, data science, and large-scale computing and networking to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab’s Computing Sciences Area researches, develops, and deploys new foundations, tools, and technologies to meet these needs and to advance research across a broad range of scientific disciplines.
Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 16 Nobel Prizes. Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and the universe. Scientists from around the world rely on the Lab’s facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of California for the U.S. Department of Energy’s Office of Science.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.