
New Fusion Reactor technology breakthroughs, bringing the dream of unlimited source of clean, cheap energy to reality

Researchers in the US, Europe, Russia, China, and Japan are striving to harness the immense energy of nuclear fusion, the process that powers the Sun and releases millions of times more energy per kilogram of fuel than burning coal. The idea is to recreate the nuclear fusion that occurs in stars, where atomic nuclei collide and fuse to form helium, releasing a huge amount of energy in the process.

 

Many energy experts believe that nuclear fusion is the only real ‘solution’ to global warming capable of producing unlimited supplies of cheap, clean, safe and sustainable electricity. The fuel is effectively limitless: hydrogen, the element used to create the fusion reaction, is the most abundant atom in the universe and can be sourced from seawater, while lithium is found throughout the Earth’s crust. Fusion reactors are also safe (they produce less radiation than we live with every day), clean (there is no combustion, so there is no pollution) and will create less waste than fission reactors.

 

However, thermonuclear fusion presents formidable scientific and engineering challenges. “In the Sun, massive gravitational forces create the right conditions for fusion, but on Earth they are much harder to achieve. Fusion fuel – different isotopes of hydrogen – must be heated to extreme temperatures of the order of 100 million degrees Celsius, and must be kept dense enough, and confined for long enough, to allow the nuclei to fuse,” explains the World Nuclear Association. The aim of the controlled fusion research programme is to achieve ‘ignition’, which occurs when enough fusion reactions take place for the process to become self-sustaining, with fresh fuel then being added to continue it. Once ignition is achieved, there is a net energy yield – about four times as much as with nuclear fission.

 

“Fusion is an expensive science, because you’re trying to build a sun in a bottle,” said Michael Williams of the National Spherical Torus Experiment. “The true pioneers in the field didn’t fully appreciate how hard a scientific problem it would be.” The necessary materials are either too expensive or simply do not exist.

Fusion technology

With current technology, the reaction most readily feasible is between the nuclei of the two heavy forms (isotopes) of hydrogen – deuterium (D) and tritium (T). Each D-T fusion event releases 17.6 MeV (2.8 × 10⁻¹² joule, compared with 200 MeV for a U-235 fission and 3–4 MeV for D-D fusion). On a mass basis, the D-T fusion reaction releases over four times as much energy as uranium fission.
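The per-mass comparison above can be checked with a little arithmetic. The sketch below converts the per-event energies quoted in the text into joules per kilogram of fuel; the unit conversions are standard physical constants, and the result recovers the "over four times" figure.

```python
# Back-of-envelope check of the energy-per-mass comparison above.
MEV_TO_J = 1.602e-13   # joules per MeV
AMU_TO_KG = 1.661e-27  # kilograms per atomic mass unit

def energy_per_kg(mev_per_event, reactant_mass_amu):
    """Energy released per kilogram of fuel consumed."""
    return (mev_per_event * MEV_TO_J) / (reactant_mass_amu * AMU_TO_KG)

dt_fusion = energy_per_kg(17.6, 2.014 + 3.016)   # D + T reactant masses
u235_fission = energy_per_kg(200.0, 235.0)       # U-235

print(f"D-T fusion:    {dt_fusion:.2e} J/kg")
print(f"U-235 fission: {u235_fission:.2e} J/kg")
print(f"ratio: {dt_fusion / u235_fission:.1f}x")
```

The ratio comes out at roughly 4.1, consistent with the statement in the text.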

 

“In a fusion reactor, the concept is that neutrons generated from the D-T fusion reaction will be absorbed in a blanket containing lithium which surrounds the core. The lithium is then transformed into tritium (which is used to fuel the reactor) and helium. The blanket must be thick enough (about 1 metre) to slow down the high-energy (14 MeV) neutrons,” explains the World Nuclear Association. The kinetic energy of the neutrons is absorbed by the blanket, causing it to heat up. The heat energy is collected by the coolant (water, helium or Li-Pb eutectic) flowing through the blanket and, in a fusion power plant, this energy will be used to generate electricity by conventional methods.

 

Deuterium occurs naturally in seawater (30 grams per cubic metre), which makes it very abundant relative to other energy resources. Tritium occurs naturally only in trace quantities (produced by cosmic rays) and is radioactive, with a half-life of around 12 years. Usable quantities can be made in a conventional nuclear reactor, or in the present context, bred in a fusion system from lithium.  Lithium is found in large quantities (30 parts per million) in the Earth’s crust and in weaker concentrations in the sea.
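To put the 30 grams per cubic metre figure in perspective, the sketch below estimates the fusion energy carried by the deuterium in one cubic metre of seawater, under the simplifying assumption that every deuteron is burned in a D-T reaction (17.6 MeV per event, one deuteron consumed per event). The coal heating value is a typical textbook figure, used only for scale.

```python
# Illustrative arithmetic: fusion energy available from the deuterium
# in one cubic metre of seawater, assuming every deuteron is burned
# in a D-T reaction (17.6 MeV per event, one D consumed per event).
MEV_TO_J = 1.602e-13
AMU_TO_KG = 1.661e-27
D_MASS_KG = 2.014 * AMU_TO_KG

d_per_m3_kg = 0.030                      # 30 grams of deuterium per m^3
n_deuterons = d_per_m3_kg / D_MASS_KG    # number of D atoms
energy_j = n_deuterons * 17.6 * MEV_TO_J

COAL_J_PER_KG = 29e6                     # typical heating value of coal
print(f"energy: {energy_j:.2e} J  (~{energy_j / COAL_J_PER_KG:.0f} kg of coal)")
```

One cubic metre of seawater holds on the order of 2.5 × 10¹³ joules of deuterium fusion energy, equivalent to burning hundreds of tonnes of coal.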

 

There are two leading approaches to producing fusion reactions today: with lasers and with magnets. Inertial confinement fusion (ICF) attempts to initiate fusion by heating and compressing a fuel target, typically a pellet containing a mixture of deuterium and tritium. Powerful lasers squeeze the hydrogen nuclei together until they fuse to form helium – the same fusion process that occurs at the centre of the Sun. The magnetic approach, by contrast, confines the reaction in large doughnut-shaped vessels called tokamaks.

 

In magnetic confinement fusion (MCF), hundreds of cubic metres of D-T plasma at a density of less than a milligram per cubic metre are confined by a magnetic field at a few atmospheres pressure and heated to fusion temperature. Magnetic fields are ideal for confining a plasma because the electrical charges on the separated ions and electrons mean that they follow the magnetic field lines. The aim is to prevent the particles from coming into contact with the reactor walls, as this would dissipate their heat and slow them down. The most effective magnetic configuration is toroidal, shaped like a doughnut, in which the magnetic field is curved around to form a closed loop.

 

While magnetic confinement seeks to extend the time that ions spend close to each other in order to facilitate fusion, the inertial confinement strategy seeks to fuse nuclei so fast that they don’t have time to move apart.
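The two strategies trade density against confinement time. A common way to quantify that trade-off is the Lawson triple product n·T·τE (density × temperature × energy confinement time), which for D-T ignition must exceed roughly 3 × 10²¹ keV·s/m³. The operating points in the sketch below are illustrative orders of magnitude, not measured values from any specific machine.

```python
# The Lawson triple product n*T*tau_E quantifies the density /
# confinement-time trade-off between the two approaches. For D-T
# ignition it must exceed roughly 3e21 keV*s/m^3. Operating points
# below are illustrative, not measurements from any real device.
IGNITION_THRESHOLD = 3e21  # keV * s / m^3, approximate D-T requirement

def triple_product(density_m3, temp_kev, confinement_s):
    return density_m3 * temp_kev * confinement_s

# Magnetic confinement: tenuous plasma held for seconds.
mcf = triple_product(1e20, 15.0, 3.0)
# Inertial confinement: enormously dense fuel for a tiny fraction of a second.
icf = triple_product(1e31, 15.0, 3e-11)

for name, value in [("MCF", mcf), ("ICF", icf)]:
    status = "above" if value > IGNITION_THRESHOLD else "below"
    print(f"{name}: {value:.1e} keV*s/m^3 ({status} threshold)")
```

Both routes can in principle reach the same triple product: magnets hold a thin plasma for seconds, while lasers compress a dense pellet for tens of picoseconds.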

 


 

Artificial intelligence speeds efforts to develop clean, virtually limitless fusion energy

Scientists at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University, working with a Harvard graduate student, are for the first time applying deep learning – a powerful new version of the machine learning form of AI – to forecast sudden disruptions that can halt fusion reactions and damage the doughnut-shaped tokamaks that house the reactions.

 

“This research opens a promising new chapter in the effort to bring unlimited energy to Earth,” Steve Cowley, director of PPPL, said of the findings, which are reported in the current issue of Nature magazine. “Artificial intelligence is exploding across the sciences and now it’s beginning to contribute to the worldwide quest for fusion power.”

 

Crucial to demonstrating the ability of deep learning to forecast disruptions—the sudden loss of confinement of plasma particles and energy—has been access to huge databases provided by two major fusion facilities: the DIII-D National Fusion Facility that General Atomics operates for the DOE in California, the largest facility in the United States, and the Joint European Torus (JET) in the United Kingdom, the largest facility in the world, which is managed by EUROfusion, the European Consortium for the Development of Fusion Energy. Support from scientists at JET and DIII-D has been essential for this work.

 

The vast databases have enabled reliable predictions of disruptions on tokamaks other than those on which the system was trained—in this case from the smaller DIII-D to the larger JET. The achievement bodes well for the prediction of disruptions on ITER, a far larger and more powerful tokamak that will have to apply capabilities learned on today’s fusion facilities. The deep learning code, called the Fusion Recurrent Neural Network (FRNN), also opens possible pathways for controlling as well as predicting disruptions.

 

“Artificial intelligence is the most intriguing area of scientific growth right now, and to marry it to fusion science is very exciting,” said Bill Tang, a principal research physicist at PPPL, coauthor of the paper and lecturer with the rank and title of professor in the Princeton University Department of Astrophysical Sciences who supervises the AI project. “We’ve accelerated the ability to predict with high accuracy the most dangerous challenge to clean fusion energy.”

Unlike traditional software, which carries out prescribed instructions, deep learning learns from its mistakes. Accomplishing this seeming magic are neural networks, layers of interconnected nodes—mathematical algorithms—that are “parameterized,” or weighted by the program to shape the desired output. For any given input the nodes seek to produce a specified output, such as correct identification of a face or accurate forecasts of a disruption. Training kicks in when a node fails to achieve this task: the weights automatically adjust themselves for fresh data until the correct output is obtained.
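The weight-adjustment loop described above can be sketched in a few lines. The toy below is a single node with two weights, trained by gradient descent to reproduce a target function; it illustrates the mechanism ("weights automatically adjust themselves for fresh data"), not the FRNN itself.

```python
import random

# Toy illustration of the training loop described above: a single node
# with two weights learns to reproduce a target function by repeatedly
# nudging its weights in proportion to its error (gradient descent).
random.seed(0)
w = [0.0, 0.0]                      # the node's adjustable parameters
target = lambda x: 2.0 * x + 1.0    # the "correct output" to learn
LEARNING_RATE = 0.1

for step in range(2000):
    x = random.uniform(-1, 1)       # fresh input data
    prediction = w[0] * x + w[1]
    error = prediction - target(x)
    # Training "kicks in": each weight adjusts against its error gradient.
    w[0] -= LEARNING_RATE * error * x
    w[1] -= LEARNING_RATE * error

print(f"learned weights: {w[0]:.2f}, {w[1]:.2f}")
```

After enough fresh data, the weights converge on the slope and intercept of the target function, which is the sense in which the network "learns from its mistakes".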

 

A key feature of deep learning is its ability to capture high-dimensional rather than one-dimensional data. For example, while non-deep learning software might consider the temperature of a plasma at a single point in time, the FRNN considers profiles of the temperature developing in time and space. “The ability of deep learning methods to learn from such complex data makes them an ideal candidate for the task of disruption prediction,” said collaborator Julian Kates-Harbeck, a physics graduate student at Harvard University and a DOE-Office of Science Computational Science Graduate Fellow who was lead author of the Nature paper and chief architect of the code.
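The difference between scalar and profile inputs can be made concrete with a minimal recurrent update: at each time step the network consumes a whole radial temperature profile (a vector, not a single number) and folds it into a hidden state that carries memory across time. The dimensions and fixed weights below are made up for illustration; this is a shape sketch, not FRNN code.

```python
import math

# Sketch of why recurrence suits profile data: each time step carries a
# whole radial temperature profile, and a hidden state accumulates
# information across time. Toy sizes and fixed weights, for shape only.
PROFILE_POINTS = 4    # temperature samples across the plasma radius
HIDDEN_SIZE = 3

W_in = [[0.1 * (i + j) for j in range(PROFILE_POINTS)] for i in range(HIDDEN_SIZE)]
W_rec = [[0.05 if i == j else 0.0 for j in range(HIDDEN_SIZE)] for i in range(HIDDEN_SIZE)]

def rnn_step(profile, hidden):
    """One recurrent update: hidden state mixes the new profile with memory."""
    return [
        math.tanh(
            sum(W_in[i][j] * profile[j] for j in range(PROFILE_POINTS))
            + sum(W_rec[i][j] * hidden[j] for j in range(HIDDEN_SIZE))
        )
        for i in range(HIDDEN_SIZE)
    ]

# A short sequence of (made-up) temperature profiles evolving in time.
sequence = [[1.0, 0.8, 0.5, 0.2], [1.1, 0.9, 0.5, 0.2], [1.3, 1.0, 0.6, 0.3]]
hidden = [0.0] * HIDDEN_SIZE
for profile in sequence:
    hidden = rnn_step(profile, hidden)
print("final hidden state:", [round(h, 3) for h in hidden])
```

A disruption predictor would attach an output layer to this hidden state to emit an alarm probability at each time step.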

 

Training and running neural networks relies on graphics processing units (GPUs), computer chips first designed to render 3-D images. Such chips are ideally suited to deep learning applications and are widely used to deliver AI capabilities such as understanding spoken language and helping self-driving cars observe road conditions.

 

Kates-Harbeck trained the FRNN code on more than two terabytes (2 × 10¹² bytes) of data collected from JET and DIII-D. After running the software on Princeton University’s Tiger cluster of modern GPUs, the team placed it on Titan, a supercomputer at the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility, and other high-performance machines.

 

A demanding task

Distributing the network across many computers was a demanding task. “Training deep neural networks is a computationally intensive problem that requires the engagement of high-performance computing clusters,” said Alexey Svyatkovskiy, a coauthor of the Nature paper who helped convert the algorithms into a production code and now is at Microsoft. “We put a copy of our entire neural network across many processors to achieve highly efficient parallel processing,” he said.
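The "copy of the network on every processor" scheme Svyatkovskiy describes is data parallelism: each worker holds an identical model, computes gradients on its own shard of the data, and the gradients are averaged before every shared weight update. The serial simulation below shows the idea on a one-parameter model; a real system would run the workers concurrently with MPI or a similar framework.

```python
# Sketch of data-parallel training: every "worker" holds a copy of the
# model, computes gradients on its own data shard, and the gradients
# are averaged before each weight update. Simulated serially here.
def worker_gradient(weight, shard):
    """Mean-squared-error gradient of the model y = weight * x on one shard."""
    return sum(2 * (weight * x - y) * x for x, y in shard) / len(shard)

# Data generated from y = 3x, split across four "processors".
data = [(x / 10, 3 * x / 10) for x in range(40)]
shards = [data[i::4] for i in range(4)]

weight = 0.0
for _ in range(200):
    grads = [worker_gradient(weight, s) for s in shards]  # runs in parallel
    weight -= 0.05 * sum(grads) / len(grads)              # averaged update

print(f"learned weight: {weight:.3f}")
```

Because the shards are equal-sized, the averaged shard gradients equal the full-dataset gradient, so the parallel run converges to the same answer as a single worker would.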

 

The software further demonstrated its ability to predict true disruptions within the 30-millisecond time frame that ITER will require, while reducing the number of false alarms. The code now is closing in on the ITER requirement of 95 percent correct predictions with fewer than 3 percent false alarms. While the researchers say that only live experimental operation can demonstrate the merits of any predictive method, their paper notes that the large archival databases used in the predictions, “cover a wide range of operational scenarios and thus provide significant evidence as to the relative strengths of the methods considered in this paper.”
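The ITER targets quoted above (95 percent of disruptions caught, fewer than 3 percent false alarms) are simply a true-positive rate and a false-positive rate over a set of discharges. The bookkeeping looks like this; the labels and predictions below are invented to show the calculation, not results from the paper.

```python
# The ITER requirements are a detection (true-positive) rate and a
# false-alarm (false-positive) rate. Toy labels/predictions below are
# made up to show the bookkeeping, not taken from the Nature paper.
def rates(actual, predicted):
    tp = sum(a and p for a, p in zip(actual, predicted))
    fp = sum((not a) and p for a, p in zip(actual, predicted))
    positives = sum(actual)
    negatives = len(actual) - positives
    return tp / positives, fp / negatives  # (detection rate, false-alarm rate)

actual    = [1] * 20 + [0] * 100                    # 20 disruptive shots, 100 quiet
predicted = [1] * 19 + [0] * 1 + [1] * 2 + [0] * 98  # 19 caught, 2 false alarms

detection, false_alarm = rates(actual, predicted)
print(f"detection rate: {detection:.0%}, false alarms: {false_alarm:.0%}")
```

This toy run would meet the ITER targets: 95 percent detection with a 2 percent false-alarm rate.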

 

From prediction to control

The next step will be to move from prediction to the control of disruptions. “Rather than predicting disruptions at the last moment and then mitigating them, we would ideally use future deep learning models to gently steer the plasma away from regions of instability with the goal of avoiding most disruptions in the first place,” Kates-Harbeck said. Highlighting this next step is Michael Zarnstorff, who recently moved from deputy director for research at PPPL to chief science officer for the laboratory. “Control will be essential for post-ITER tokamaks—in which disruption avoidance will be an essential requirement,” Zarnstorff noted.

 

Progressing from AI-enabled accurate predictions to realistic plasma control will require more than one discipline. “We will combine deep learning with basic, first-principle physics on high-performance computers to zero in on realistic control mechanisms in burning plasmas,” said Tang. “By control, one means knowing which ‘knobs to turn’ on a tokamak to change conditions to prevent disruptions. That’s in our sights and it’s where we are heading.”

 

Lasers could heat materials to temperatures hotter than the center of the Sun in only 20 quadrillionths of a second, according to new research.

Laser fusion attempts to force nuclear fusion in tiny pellets or microballoons of a deuterium-tritium mixture by zapping them with such a high energy density that they will fuse before they have time to move away from each other. This is an example of inertial confinement.

 

Theoretical physicists from Imperial College London have devised an extremely rapid heating mechanism that they believe could heat certain materials to ten million degrees in much less than a million millionth of a second. The heating would be about 100 times faster than rates currently seen in fusion experiments using the world’s most energetic laser system at the Lawrence Livermore National Laboratory in California. The race is now on for fellow scientists to put the team’s method into practice.

 

New, powerful magnets key to building the world’s first energy-producing fusion experiment

The dream of nuclear fusion is on the brink of being realised, according to a major new US initiative that says it will put fusion power on the grid within 15 years. The problem is that until now every fusion experiment has operated at an energy deficit, making it useless as a form of electricity generation. The process produces net energy only at extreme temperatures of hundreds of millions of degrees Celsius – hotter than the centre of the Sun and far too hot for any solid material to withstand.

 

The project, a collaboration between scientists at MIT and a private company, will take a radically different approach from other efforts to transform fusion from an expensive science experiment into a viable commercial energy source. The team intends to use a new class of high-temperature superconductors that they predict will allow them to create the world’s first fusion reactor producing more energy than needs to be put in to get the fusion reaction going.

 

One potential solution is to increase the strength of the magnets. Magnetic fields in fusion devices serve to keep the hot ionized gas, called a plasma, isolated and insulated from ordinary matter. The insulation becomes more effective as the field gets stronger, meaning that less space is needed to keep the plasma hot. Doubling the magnetic field in a fusion device allows its volume – a good indicator of how much the device costs – to be reduced by a factor of eight while achieving the same performance. Thus, stronger magnetic fields make fusion smaller, faster and cheaper, and potentially reduce the amount of energy that needs to be put in to get the fusion reaction off the ground.
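The scaling quoted above (doubling the field shrinks the device eightfold at fixed performance) corresponds to volume falling with the cube of the field, V ∝ B⁻³. The baseline numbers in the sketch are placeholders, not any real machine's parameters.

```python
# Volume scaling implied by the text: at fixed fusion performance,
# volume ~ B^-3, so doubling the field shrinks the device eightfold.
# Baseline values are placeholders, not a real machine's parameters.
def scaled_volume(base_volume_m3, base_field_t, new_field_t):
    return base_volume_m3 * (base_field_t / new_field_t) ** 3

baseline = scaled_volume(1000.0, 5.0, 5.0)    # reference device
doubled  = scaled_volume(1000.0, 5.0, 10.0)   # same performance, 2x field

print(f"volume at 2x field: {doubled:.0f} m^3 ({baseline / doubled:.0f}x smaller)")
```

Since cost tracks volume, this cubic scaling is why a modest gain in magnet strength has such an outsized effect on the economics.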

 

Superconductors are materials that allow currents to pass through them without losing energy, but to do so they must be very cold. New superconducting compounds, however, can operate at much higher temperatures than conventional superconductors. Critical for fusion, these superconductors function even when placed in very strong magnetic fields. While originally in a form not useful for building magnets, researchers have now found ways to manufacture high-temperature superconductors in the form of “tapes” or “ribbons” that make magnets with unprecedented performance. A newly available superconducting material – a steel tape coated with a compound called yttrium-barium-copper oxide, or YBCO – has allowed scientists to produce smaller, more powerful magnets.

 

The magnets demonstrated with these tapes so far are much too small for fusion machines. Before the new fusion device, called SPARC, can be built, the new superconductors must be incorporated into the kind of large, strong magnets needed for fusion. Once the magnet development succeeds, the next step will be to construct and operate the SPARC fusion experiment. SPARC will be a tokamak, a type of magnetic confinement configuration similar to many machines already in operation.

 

MIT researchers have found a way for nuclear fusion reactors to shed excess heat, one of the biggest hurdles to making them work.

Researchers at MIT have helped find an answer to one of the biggest problems posed by the technology: how to shed excess heat. The team’s design, unlike that of typical fusion plants, makes it possible to open the device’s internal chamber and replace critical components. This is essential for the newly proposed heat-draining mechanism, since temperatures inside the chamber reach millions of degrees Celsius.

 

Publishing their findings in the journal Fusion Engineering and Design, the researchers said the way the design sheds heat is similar to the exhaust system in a car. In the new design, the ‘exhaust pipe’ is much longer and wider than is possible in any of today’s designs. This makes it much better at shedding heat, but the engineering needed to make it work required exploring dozens of design alternatives.

 

After much trial and error, the team eventually arrived at a design known as ARC, standing for affordable, robust and compact. With magnets built in sections for easy removal, it is possible to access the entire interior of the chamber and to place the secondary magnets inside the main coils rather than outside.

 

In conventional fusion reactor designs, the secondary magnetic coils (which shape the plasma) lie outside the primary ones, because there is simply no way to put these coils inside the solid primary coils. That means the secondary coils need to be large and powerful, to make their fields penetrate the chamber. As a result, they are not very precise in how they control the plasma shape.

 

Described by the director of MIT’s Plasma Science and Fusion Center, Prof Dennis Whyte, as a “really exciting” and “revolutionary” design, the exhaust concept brings us one step closer to a working fusion reactor. If realised, it would allow engineers to produce clean, abundant energy using deuterium, a fuel derived from seawater. “This is opening up new paths in thinking about divertors and heat management in a fusion device,” Whyte said.

 

System monitors radiation damage to materials in real-time

Researchers at MIT and Sandia National Laboratories have developed, tested, and made available a new system that can monitor radiation-induced changes continuously, providing more useful data much faster than traditional methods. With many nuclear plants nearing the end of their operational lifetimes under current regulations, knowing the condition of materials inside them can be critical to understanding whether their operation can be safely extended, and if so by how much.

 

The new laser-based system can be used to observe changes to the physical properties of the materials, such as their elasticity and thermal diffusivity, without destroying or altering them, the researchers say. The findings are described in the journal Nuclear Instruments and Methods in Physics Research Section B in a paper by MIT doctoral student Cody A. Dennett, professor of nuclear science and engineering Michael P. Short, and technologist Daniel L. Buller and scientist Khalid Hattar from Sandia.

 

The new system, based on a technology called transient grating spectroscopy, uses laser beams to probe minute changes at a material’s surface that can reveal details about changes in the structure of the material’s interior. Two years ago, Dennett and Short adapted the approach to monitor radiation effects. Now, after extensive testing, the system is ready for use by researchers exploring the development of new materials for next-generation reactors, or those looking to extend the lives of existing reactors through a better understanding of how materials degrade over time in the harsh radiation environment inside reactor vessels.

 

 

To simulate the effects of neutron bombardment — the type of radiation that causes most of the material degradation in a reactor environment — researchers commonly use ion beams, which produce a similar kind of damage but are much easier to control and safer to work with. The team used a 6-megavolt ion accelerator facility at Sandia as the basis for the new system. These types of facilities accelerate testing because they can simulate years of operational neutron exposure in just a few hours.
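The acceleration claim can be expressed with simple bookkeeping in dpa (displacements per atom), the standard unit of irradiation dose: if the ion beam deposits damage several orders of magnitude faster than a reactor's neutron flux, years of service compress into hours. Both damage rates below are assumed order-of-magnitude values for illustration, not figures from the Sandia facility.

```python
# Rough bookkeeping for the acceleration claim above, in dpa
# (displacements per atom). Both damage rates are assumed
# order-of-magnitude values, not figures from the Sandia facility.
SECONDS_PER_YEAR = 3600 * 24 * 365

neutron_dpa_per_s = 1e-7    # assumed typical-order reactor damage rate
ion_dpa_per_s = 1e-3        # assumed typical-order ion-beam damage rate

service_years = 5
total_dpa = neutron_dpa_per_s * service_years * SECONDS_PER_YEAR
beam_hours = total_dpa / ion_dpa_per_s / 3600

print(f"{service_years} years of service = {total_dpa:.1f} dpa, "
      f"reproduced in {beam_hours:.1f} beam-hours")
```

With these assumed rates, five years of reactor exposure accumulates in under five hours of beam time, matching the "years in a few hours" claim in scale.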

 

By using the real-time monitoring ability of this system, Dennett says, it’s possible to pinpoint the time when the physical changes to the material start to accelerate, which tends to happen fairly suddenly and progress rapidly. By stopping the experiment just at that point, it’s then possible to study in detail what happens at this critical moment. “This allows us to target what the mechanistic reasons behind these structural changes are,” he says.

 

Short says the system could perform detailed studies of the performance of a given material in a matter of hours, whereas it might otherwise take months just to get through the first iteration of finding the point when degradation sets in. For a complete characterization, conventional methods “might be taking half a year, versus a day” using the new system, he says.

 

In their tests of the system, the team used two pure metals — nickel and tungsten — but the facility can be used to test all sorts of alloys as well as pure metals, and could also test many other kinds of materials, the researchers say. “One of the reasons we’re so excited here,” Dennett says, is that when they have described this method at scientific conferences, “everybody we’ve talked to says ‘can you try it on my material?’ Everybody has an idea of what will happen if they can test their own thing, and then they can move much faster in their research.”

 

The actual measurements made by the system, which stimulates vibrations in the material using a laser beam and then uses a second laser to observe those vibrations at the surface, directly probe the elastic stiffness and thermal properties of the material, Dennett explains. But that measurement can then be used to extrapolate other related characteristics, including defect and damage accumulation, he says. “It’s what they tell you about the underlying mechanisms” that’s most significant.
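The inference chain Dennett describes can be sketched numerically: the crossed pump beams imprint a grating of known spacing, the probe laser measures the surface acoustic wave's oscillation frequency, and the wave speed (and from it an effective elastic stiffness, of order density times speed squared) follows. All numbers below are illustrative, not measured values from the experiment.

```python
# Sketch of the transient-grating inference chain: grating spacing plus
# measured oscillation frequency gives the surface wave speed, from
# which an effective stiffness follows. Numbers are illustrative only.
grating_spacing_m = 5e-6        # set by the pump-beam crossing angle
measured_freq_hz = 560e6        # oscillation seen by the probe laser

wave_speed = measured_freq_hz * grating_spacing_m   # v = f * wavelength
density = 8900.0                                    # nickel, kg/m^3
effective_stiffness_pa = density * wave_speed ** 2  # ~ rho * v^2

print(f"surface wave speed: {wave_speed:.0f} m/s")
print(f"effective stiffness: {effective_stiffness_pa / 1e9:.0f} GPa")
```

A radiation-induced shift in the measured frequency therefore maps directly onto a change in stiffness, which is what lets the system track damage without touching the sample.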

 

The unique facility, now in operation at Sandia, is also the subject of ongoing work by the team to further improve its capabilities, Dennett says. “It’s very improvable,” he says, adding that they hope to add more diagnostic tools to probe more properties of materials during irradiation. The work is “a clever engineering approach that will allow researchers to characterize the response of a variety of materials to irradiation damage,” says Laurence J. Jacobs.

 

 

References and Resources also include:

http://www.sciencemag.org/news/2016/06/giant-us-fusion-laser-might-never-achieve-goal-report-concludes

https://www.sciencedaily.com/releases/2018/11/181105105424.htm

https://www.siliconrepublic.com/machines/nuclear-fusion-breakthrough-excess-heat

http://news.mit.edu/2018/system-monitors-radiation-damage-materials-1218

https://phys.org/news/2019-04-artificial-intelligence-efforts-virtually-limitless.html?utm_source=nwletter&utm_medium=email&utm_campaign=weekly-nwletter

 
