Using NERSC Systems, Physicists Close In on a Rare-Particle Decay Process
Underground Experiment May Unlock the Mysteries of the Neutrino
June 11, 2012
NERSC Contact: Linda Vu, firstname.lastname@example.org, +1 510 495 2402
With help from supercomputers at the Department of Energy’s National Energy Research Scientific Computing Center (NERSC), the Enriched Xenon Observatory experiment (EXO-200) has placed the most stringent constraints yet on the nature of a process called neutrinoless double beta decay. In doing so, the physicists have narrowed down the range of possible masses for the neutrino—a tiny uncharged particle that rarely interacts with anything, passing right through people and planets at nearly the speed of light.
If discovered, this process could have profound implications for how scientists understand the fundamental laws of physics and help solve some of the universe’s biggest mysteries—including why there is more matter than antimatter and, therefore, why regular matter like planets, stars and humans exist at all.
“NERSC’s resources were essential to our data analysis efforts; we used the center’s Hopper system to effectively fit the experimental data and extract the final results,” says Michael Marino, a postdoctoral researcher at Germany’s Technical University of Munich and member of the EXO-200 collaboration.
EXO-200 is an international collaboration led by Stanford University and DOE's SLAC National Accelerator Laboratory. The team, consisting of 80 researchers, has submitted a paper describing the results to Physical Review Letters. Lawrence Berkeley National Laboratory (Berkeley Lab) manages NERSC for DOE.
The Problem with Neutrinoless Double-Beta Decay
In normal double-beta decay, which was first observed in 1986, two neutrons in an unstable atomic nucleus turn into two protons. And in the process, two electrons and two antineutrinos—the antimatter counterparts of neutrinos—are emitted.
But physicists have suggested that two neutrons could also decay into two protons by emitting two electrons without producing any antineutrinos. This implies that the two antineutrinos that would otherwise be emitted in this “neutrinoless” double beta decay somehow cancel each other out.
For this to happen, a neutrino must be its own antiparticle, allowing one of the two neutrinos to act as an antineutrino and annihilate the other. However, the widely accepted scientific theory that describes how all elementary particles behave and interact—called the Standard Model—does not predict that a neutrino can be its own antiparticle. So if this neutrinoless process does indeed exist, physicists would be forced to revise the Standard Model.
"People have been looking for this process for a very long time," says Petr Vogel, senior research associate in physics, emeritus, at Caltech and a member of the EXO-200 team. "It would be a very fundamental discovery if someone actually observes it."
Computer Models and Non-Detection Hint at a Half-Life
According to Marino, the EXO-200 team has a detailed computer model that predicts the physics processes occurring inside the observatory. After each experiment, researchers “fit” this model to the experimental results, searching for the parameter values that describe the data best.
In this case, the first results showed no signal for neutrinoless double beta decay in almost seven months of data. When the team fit these results using the model parameter that describes the rate of neutrinoless double-beta decay of Xe-136, they were able to rule out a range of possible values for the half-life of the neutrinoless process.
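In simplified form, fitting a single rate parameter to binned data works like the sketch below: adjust the signal strength until a Poisson likelihood of the observed counts is maximized. The bin contents, background shape, and signal template here are invented for illustration; they are not EXO-200 data or the collaboration’s actual fitting code.

```python
import math

# Hypothetical binned spectrum: expected background counts, a unit-normalized
# signal template, and observed counts per energy bin (all invented numbers).
background = [40.0, 32.0, 25.0, 20.0, 16.0]
signal_shape = [0.05, 0.15, 0.60, 0.15, 0.05]
observed = [38, 35, 27, 18, 17]

def neg_log_likelihood(s):
    """Poisson negative log-likelihood for a signal strength of s counts."""
    nll = 0.0
    for b, f, n in zip(background, signal_shape, observed):
        mu = b + s * f                 # expected counts in this bin
        nll += mu - n * math.log(mu)   # constant log(n!) terms dropped
    return nll

# Grid scan: the best-fit signal strength minimizes the negative log-likelihood.
grid = [i * 0.1 for i in range(0, 301)]
best_s = min(grid, key=neg_log_likelihood)
print(f"best-fit signal strength: {best_s:.1f} counts")
```

A null result corresponds to a best-fit signal strength consistent with zero, which is what lets the team exclude short half-lives for the decay.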
Marino also notes that the NERSC resources were crucial to performing a statistical analysis of the data. In their paper, the team reports at the 90 percent confidence level that the half-life of the neutrinoless double-beta decay of Xe-136 cannot be shorter than 1.6 × 10²⁵ years, about a quadrillion times the age of the universe.
“This ‘confidence level’ statement essentially means that if we were to run this experiment 1,000 times, we would expect to calculate limits on the neutrinoless double-beta decay rate of Xe-136 that enclose the true rate 900 times,” says Marino. “To check this in practice, we cannot run this experiment 1,000 times, but we did run simulations of this experiment 1,000 times on Hopper.”
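The coverage check Marino describes can be sketched under simplifying assumptions: a background-free Poisson counting experiment with a known true rate, repeated 1,000 times, where each pseudo-experiment yields a classical 90 percent upper limit. The numbers below are illustrative and the procedure is far simpler than EXO-200’s actual analysis.

```python
import math
import random

def poisson_cdf(n, mu):
    """P(N <= n) for N ~ Poisson(mu)."""
    term, total = math.exp(-mu), math.exp(-mu)
    for k in range(1, n + 1):
        term *= mu / k
        total += term
    return total

def upper_limit_90(n_obs):
    """Classical 90% CL Poisson upper limit: the rate s at which
    P(N <= n_obs | s) drops to 0.10, found by bisection."""
    lo, hi = 0.0, n_obs + 30.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_obs, mid) > 0.10:
            lo = mid
        else:
            hi = mid
    return hi

def sample_poisson(mu, rng):
    """Knuth's algorithm; fine for small mu."""
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)
true_rate = 3.0  # illustrative "true" decay rate in counts
trials = 1000
covered = sum(true_rate <= upper_limit_90(sample_poisson(true_rate, rng))
              for _ in range(trials))
# By construction the limits cover the true rate at least 90% of the time,
# so roughly 900 or more of the 1000 intervals should enclose it.
print(f"{covered}/{trials} limits enclose the true rate")
```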
With the value of the half-life pinned down, physicists can calculate the mass of a neutrino—another longstanding mystery. The new data suggest that a neutrino cannot be more massive than about 0.140 to 0.380 electron volts (eV, a unit of mass commonly used in particle physics); an electron, by contrast, is about 500,000 eV, or about 9 × 10⁻³¹ kilograms.
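The unit conversion behind those figures is a one-liner using the CODATA value of the electron volt expressed as a mass, 1 eV/c² ≈ 1.782662 × 10⁻³⁶ kg:

```python
# CODATA conversion: 1 eV/c^2 expressed in kilograms.
EV_TO_KG = 1.782662e-36

electron_ev = 511_000              # electron mass, ~511 keV
print(electron_ev * EV_TO_KG)      # ~9.1e-31 kg, matching the figure in the text

# The EXO-200 bound on the neutrino mass, converted to kilograms:
for m_ev in (0.140, 0.380):
    print(f"{m_ev} eV = {m_ev * EV_TO_KG:.2e} kg")
```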
In the EXO-200 experiment, physicists monitor a copper cylinder filled with 200 kilograms of liquid xenon-136, an unstable isotope that theoretically can undergo neutrinoless double beta decay. Very sensitive detectors line the wall at both ends of the cylinder. To shield it from cosmic rays and other background radiation that may contaminate the signal of such a decay, the apparatus is buried deep underground in the DOE's Waste Isolation Pilot Plant in Carlsbad, New Mexico, where low-level radioactive waste is stored. The physicists then wait to see a signal.
The process, however, is very rare. In normal double-beta decay, half of a given sample would decay after 10²¹ years—a half-life roughly 100 billion times longer than the time that has elapsed since the Big Bang.
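The timescale comparisons quoted above check out with a bit of arithmetic against the roughly 13.8-billion-year age of the universe:

```python
age_of_universe_yr = 1.38e10      # ~13.8 billion years since the Big Bang

two_nu_half_life_yr = 1e21        # ordinary double-beta decay half-life of Xe-136
ratio_2nu = two_nu_half_life_yr / age_of_universe_yr
print(ratio_2nu)                  # ~7e10: roughly 100 billion times the universe's age

limit_half_life_yr = 1.6e25       # EXO-200 lower limit for the neutrinoless mode
ratio_0nu = limit_half_life_yr / age_of_universe_yr
print(ratio_0nu)                  # ~1.2e15: about a quadrillion times the universe's age
```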
If neutrinoless double beta decay is discovered, Vogel notes that could have implications for cosmology and the origin of matter. Right after the Big Bang, the universe had the same amount of matter as antimatter. Somehow that balance was tipped, producing a slight surplus in matter that eventually led to the existence of all of the matter in the universe. If a neutrino can in fact be its own antiparticle, this might have played a key role in tipping that balance.
The EXO-200 experiment, which started taking data last year, will continue its quest to measure the half-life of the neutrinoless process for the next several years.
The EXO collaboration involves scientists from SLAC, Stanford, the University of Alabama, Universität Bern, Caltech, Carleton University, Colorado State University, University of Illinois Urbana-Champaign, Indiana University, UC Irvine, Institute for Theoretical and Experimental Physics (Moscow), Laurentian University, the University of Maryland, the University of Massachusetts–Amherst, the University of Seoul, and the Technische Universität München. This research was supported by the DOE and the National Science Foundation in the United States, the Natural Sciences and Engineering Research Council in Canada, the Swiss National Science Foundation, and the Russian Foundation for Basic Research.
This feature was adapted from a story written by Marcus Woo and originally published on the Caltech website: http://media.caltech.edu/press_releases/13520.
About Computing Sciences at Berkeley Lab
High performance computing plays a critical role in scientific discovery. Researchers increasingly rely on advances in computer science, mathematics, computational science, data science, and large-scale computing and networking to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab’s Computing Sciences Area researches, develops, and deploys new foundations, tools, and technologies to meet these needs and to advance research across a broad range of scientific disciplines.