A key challenge for fusion power plants is finding a way to generate high fusion power while protecting the walls of the magnetic vessel from the intense heat and particle flow created by fusion reactions. Researchers at the DIII-D National Fusion Facility have been developing high-performance regimes for years, and new research has now demonstrated a path toward practical methods of removing the excess heat and impurities that will be present in future power plants.
The experiments took a previously identified high-performance regime known as Super-H mode and tested methods to control heat and particle flow at the edge of the fusion reaction. This new approach uses advanced control algorithms and optimized methods of cooling the edge of the plasma without excessively degrading the reaction in the core. The results identify a pathway for increased performance of the ITER fusion experiment under construction in France, as well as the fusion power plants that will come after it.
“Practical fusion energy requires a means to maintain high fusion power while containing the reaction in a way that protects the equipment around it,” said Theresa Wilks of the Massachusetts Institute of Technology (MIT) Plasma Science and Fusion Center, who led the study. “These results offer a promising approach to getting there.”
DIII-D is the largest magnetic fusion research facility in the U.S. and is operated by General Atomics (GA) as a national user facility for the U.S. Department of Energy’s Office of Science. The heart of the facility is a tokamak that uses powerful electromagnets to produce a doughnut-shaped magnetic bottle for confining a fusion plasma at temperatures exceeding 100 million degrees. Inside a tokamak, matter transitions to a plasma state in which electrons are stripped from their nuclei. Within a sufficiently hot plasma, nuclei collide and fuse together, producing fusion energy in a manner similar to the sun. (See Fusion Energy 101 explainer below for more detail on how fusion works.)
Scientists have been working to develop more effective methods for confining plasmas at the temperatures and pressures necessary for cost-effective fusion energy. The theoretical model for Super-H mode was developed several years ago by researchers at GA, the University of York, and the Culham Centre for Fusion Energy in the UK. It works by increasing temperature and pressure in the outer region of the plasma, called the pedestal. Higher pressures and temperatures at the pedestal lead to much higher fusion performance in the core.
However, the walls of the tokamak have to be protected from these intense temperatures, especially in an area of the tokamak known as the divertor, where excess heat and particles are removed during operation. Researchers have tried various methods of edge cooling, such as injecting gases into the plasma, in order to achieve what is referred to as a “detached” divertor. Unfortunately, this cooling can spread from the pedestal into the core, reducing fusion performance. The ideal regime would allow for a high-performance core and a fully detached divertor, but solutions have so far been elusive. Super-H mode may offer a path to such a solution.
“Because the Super-H mode regime is compatible with both a high-density and high-temperature pedestal and a high-density divertor,” Wilks said, “it gives us a great platform to study the impacts of a high-performance core on divertor conditions in current devices. And because the physics of Super-H mode are similar to what’s predicted to exist in larger devices, we can use simulations to scale this research to future reactors and power plants.”
The recent research, conducted by a multi-institutional team from MIT, GA, Princeton Plasma Physics Laboratory, Lawrence Livermore National Laboratory, Sandia National Laboratories, and the Culham Centre for Fusion Energy, explored ways in which divertor detachment could be achieved with Super-H mode. These included a mix of nitrogen injection and modifications to the shape of the plasma using the tokamak magnetic fields.