Computing steps up to capture, keep carbon dioxide underground
January 25, 2012
Ramgen Power Systems
University of Texas
Producing electricity to power our homes and businesses while also reducing carbon dioxide emissions remains difficult. Fossil fuels provide most of the electricity generated in the United States; coal-burning plants alone produce nearly half of the nation's power, simultaneously spewing carbon dioxide into the atmosphere.
Even with innovations in renewable energy, meeting a goal of cutting U.S. carbon dioxide emissions to 20 percent below 1990 levels by 2020 will require developing safe, effective ways to capture and store the CO2 that fossil fuels release as they burn.
Many of the ideas underlying carbon capture and sequestration are either in laboratory tests or pilot studies. Using traditional scale-up processes to commercialize new research ideas in the power industry has historically taken 20 to 30 years, says David Miller, a technical team lead for the Carbon Capture Simulation Initiative at the DOE’s National Energy Technology Laboratory (NETL) in Morgantown, W.Va.
DOE supports several computational projects to accelerate this process. Researchers use supercomputers to speed designs for new materials that sponge up carbon dioxide and systems that facilitate its large-scale capture. Other scientists use computation to better understand the geological and physical processes and lay the groundwork for approaches that will sequester gas underground.
Grabbing the carbon dioxide
As power plants burn coal, heat converts boiler water to steam, which spins turbines and generates power. The primary combustion product is carbon dioxide, part of a mixture of gases that includes nitrogen and water vapor.
To store only carbon dioxide from this combustion process, researchers need materials that sponge it up while letting other gases escape. At the same time, they’d like to reuse these absorbent materials, so they also need ones that eventually release the carbon dioxide through processes such as heating.
A variety of materials structured with nanometer-sized pores are possible candidates for carbon dioxide collection, including crystalline porous substances such as zeolites and metal-organic frameworks, says Maciej Haranczyk of Lawrence Berkeley National Laboratory. Such materials aren't easy to synthesize in a laboratory, so researchers would like to focus on specific structures with promising features.
That’s where one DOE Energy Frontier Research Center (EFRC) is looking. (See sidebar, “Energy Frontier Research Centers cover innovation spectrum.”) The EFRC, led by Berend Smit and Jeffrey Long at the University of California, Berkeley, is developing both laboratory and computational methods to find new nanoporous materials that absorb carbon dioxide.
Such materials can be expensive and time-consuming to synthesize in a laboratory; simulating them in detail can also consume huge amounts of computer resources, says Haranczyk, who helps with the computational portion of the problem. “It requires a week of calculation for one material,” he says. “For an entire database you’d need a hundred thousand years of CPU time.”
So Haranczyk and his colleagues have borrowed techniques from drug discovery, using informatics tools that help pharmaceutical companies profile characteristics that make drug compounds more effective. As a result, EFRC researchers can efficiently sample materials from a large database, concentrating only on structures that bring statistically important, new information. By comparing the shape and topology of materials and the voids within them, they learn how those differences translate into material properties. The researchers use this knowledge to perform further searches focused only on structures likely to exhibit interesting properties.
In particular, the researchers want to maximize the number of pores in these materials that will preferentially absorb carbon dioxide. Haranczyk and his colleagues identify interesting materials, then pass them on to Smit’s group, which runs more intensive computational simulations on fewer candidate substances.
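The sampling strategy Haranczyk describes can be sketched as a diversity-driven pick over a database of material descriptors. The descriptors, database size and function below are hypothetical stand-ins for illustration, not the EFRC's actual code; greedy max-min (farthest-point) selection is one common way to choose structurally distinct candidates for closer study:

```python
import numpy as np

def diverse_sample(descriptors, k):
    """Greedy max-min sampling: each new pick is as far as possible,
    in descriptor space, from the materials already chosen."""
    X = np.asarray(descriptors, dtype=float)
    chosen = [0]  # seed with the first material in the database
    dists = np.linalg.norm(X - X[0], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))  # farthest from all chosen so far
        chosen.append(nxt)
        # track each material's distance to its nearest chosen neighbor
        dists = np.minimum(dists, np.linalg.norm(X - X[nxt], axis=1))
    return chosen

# Toy database: each row is a (pore diameter, void fraction, surface area)
# descriptor vector, scaled to [0, 1] -- invented values.
rng = np.random.default_rng(0)
database = rng.random((1000, 3))
shortlist = diverse_sample(database, 10)  # 10 structurally distinct candidates
```

The shortlist, rather than the whole database, would then go on to the expensive per-material simulations.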
The work is starting to show results. Recently the team identified a couple of materials that could outperform known carbon capture technologies.
Multiple scales of carbon capture
Initially separating carbon dioxide from coal-fired plant flue gas is just one piece of the carbon capture process. Researchers from NETL and four other national laboratories – Lawrence Berkeley, Los Alamos, Pacific Northwest and Lawrence Livermore – are working to simulate the whole capture system and the devices involved.
Ramping up carbon capture from the laboratory to full-scale power plants typically would require building many plants of varying size to test scenarios. Computation can help compress that lengthy process, Miller says. “Science-based simulation in conjunction with targeted pilot plants can help validate the models along the way and enable larger scales to be reached more quickly.”
An inherent challenge in these simulations is portraying processes that occur at a wide range of scales. Equations that account for the motion of particles work at the micro- or nanoscale. But scientists also need to understand how fluids move through full-size devices and how those devices interact with one another and the rest of the plant. That complexity requires running repeated simulations at different scales to understand all of the phenomena and processes involved.
With models’ increasing role in designing and planning carbon capture systems, researchers also must add a new computational component to their work. All simulations involve parameters that carry some level of uncertainty, and testing simulation predictions in real life requires that scientists understand how accurate their models are. Including uncertainty quantification in the simulations lets researchers see how changes in parameters affect predicted outcomes and minimize uncertainty where possible.
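In its simplest form, uncertainty quantification means drawing uncertain parameters from assumed distributions, pushing each draw through the model, and examining the spread of the outputs. The reduced capture model and parameter distributions below are invented purely to illustrate the idea:

```python
import numpy as np

def capture_model(rate_constant, surface_area):
    # Hypothetical reduced model: fraction of CO2 removed in a contactor,
    # saturating toward 1 as kinetics and contact area increase.
    return 1.0 - np.exp(-rate_constant * surface_area)

rng = np.random.default_rng(1)
n = 10_000
# Uncertain inputs drawn from assumed (illustrative) distributions.
rate = rng.normal(0.8, 0.1, n)   # kinetic parameter, mean +/- uncertainty
area = rng.normal(2.0, 0.3, n)   # effective contact area

captured = capture_model(rate, area)
mean, spread = captured.mean(), captured.std()
# 'spread' shows how input uncertainty propagates into the prediction.
```

Sensitivity studies of this kind show which parameters dominate the output uncertainty and so which ones most need pinning down with pilot-plant data.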
Compression and delivery
Once carbon dioxide is captured and separated from other gases, it still must be compressed and safely stored underground to prevent its release into the atmosphere. That’s easier said than done. Compressing carbon dioxide by traditional methods can require massive, expensive equipment plus a large portion of the power plant’s output to drive it.
A technology first conceived for supersonic flight propulsion may reduce equipment size along with capital and operating costs. Ramgen Power Systems in Bellevue, Wash., has devised a stationary carbon dioxide compressor based on the shock wave compression used in the inlets of supersonic aircraft engines.
The idea arose when DOE scientists wondered whether shock wave compression might provide an alternative to more costly solutions, says Jarlath Hume, Ramgen’s vice president for government relations. Initial analysis concluded the technology could make CO2 sequestration more affordable. With fewer stages, the compressor allows the heat it generates to be used productively in the total carbon capture and sequestration system. The result is a smaller machine with less impact on the power plant’s output than conventional systems with many more stages, says Allan Grosvenor, a Ramgen senior engineer.
Because it’s an innovative compression technology, engineers haven’t had years of experience to understand how such a system might behave. Instead of building and testing a huge number of prototypes, Ramgen researchers are doing much of that work computationally and then confirming simulation predictions with tests. They use Jaguar, the Cray XT5 at Oak Ridge National Laboratory’s Leadership Computing Facility, to model the complex aerodynamics and evaluate how different design options affect predicted performance. Their intensive simulations are providing valuable design information in preparation for tests on a 10,000 hp CO2 compressor at a plant in Olean, N.Y. Test data will validate the computer analysis and help researchers further refine the design.
Compressing and delivering carbon dioxide isn’t enough. Scientists also need to understand how it behaves underground. One idea is to inject the CO2 into large salt-water aquifers thousands of feet below Earth’s surface. But that process raises a host of questions. Researchers must predict and analyze chemical reactions that might occur as compressed carbon dioxide mixes with the brine. Pressure and heat generated might affect the underlying geology of rocks that hold it in place. Researchers also must understand how carbon dioxide leaks could form and how to prevent them.
Mary Wheeler, director of the Center for Subsurface Modeling at the University of Texas at Austin, has used computational modeling to understand the behavior of fluids in underground systems such as aquifers or petroleum and natural gas reservoirs. Now she’s focused that work on modeling many of these specific questions in carbon sequestration. The research is a component of another DOE EFRC, the Center for Frontiers in Subsurface Energy Security, which includes researchers from UT-Austin and Sandia National Laboratories.
The Gulf Coast Stacked Storage Project centers on a saline aquifer that spreads from Texas to the Florida Panhandle, with a primary research site in Cranfield, Miss. This location provides experimental data that Wheeler and colleagues use in their models.
As with carbon capture, carbon sequestration involves many processes, size scales from molecules to miles and time scales from seconds to centuries, Wheeler says. That complexity demands parallel computing resources. Because of the increasing reliance on simulations, Wheeler and her colleagues also are quantifying uncertainty so they can understand how to align their simulations as closely as possible with experimental data.
“We’re really trying to calibrate some of these models and be able to ascertain what physics do you really need to include in models,” Wheeler says. “It’s a huge computational problem.”
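The calibration Wheeler describes amounts to tuning model parameters until simulation output matches field observations as closely as possible. A minimal sketch, assuming a hypothetical exponential pressure-decline model and synthetic "observations" standing in for real field data (none of this is the center's actual physics):

```python
import numpy as np

def model(t, k, p0=10.0):
    # Hypothetical pressure-decline model for an injection site:
    # pressure relaxes exponentially with rate parameter k.
    return p0 * np.exp(-k * t)

# Synthetic "field" observations with measurement noise,
# generated from a known true k = 0.35 for this illustration.
t_obs = np.linspace(0.0, 5.0, 20)
rng = np.random.default_rng(2)
p_obs = model(t_obs, 0.35) + rng.normal(0.0, 0.1, t_obs.size)

# Calibrate k by grid search: keep the value that minimizes
# the squared misfit between model and observations.
k_grid = np.linspace(0.05, 1.0, 400)
misfit = [np.sum((model(t_obs, k) - p_obs) ** 2) for k in k_grid]
k_best = k_grid[int(np.argmin(misfit))]
```

At the scales Wheeler works with, the same misfit-minimization idea plays out over far larger parameter spaces and full reservoir simulators, which is what makes it "a huge computational problem."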
About Computing Sciences at Berkeley Lab
High performance computing plays a critical role in scientific discovery, and researchers increasingly rely on advances in computer science, mathematics, computational science, data science, and large-scale computing and networking to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab’s Computing Sciences Area researches, develops, and deploys new foundations, tools, and technologies to meet these needs and to advance research across a broad range of scientific disciplines.
Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 13 Nobel Prizes. Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and the universe. Scientists from around the world rely on the Lab’s facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of California for the U.S. Department of Energy’s Office of Science.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.