Before scientists can effectively capture and deploy fusion energy, they must learn to predict major disruptions that can halt fusion reactions and damage the walls of doughnut-shaped fusion devices called tokamaks. Timely prediction of disruptions, the sudden loss of control of the hot, charged plasma that fuels the reactions, will be vital to triggering steps to avoid or mitigate such large-scale events.
Today, researchers at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University are employing artificial intelligence to improve predictive capability. Researchers led by William Tang, a PPPL physicist and a lecturer with the rank and title of professor at Princeton University, are developing the code for predictions for ITER, the international experiment under construction in France to demonstrate the practicality of fusion energy.
Form of “deep learning”
The new predictive software, called the Fusion Recurrent Neural Network (FRNN) code, is a form of “deep learning” — a newer and more powerful version of modern machine-learning software, an application of artificial intelligence. “Deep learning represents an exciting new avenue toward the prediction of disruptions,” Tang said. “This capability can now handle multi-dimensional data.”
FRNN is built on recurrent neural networks, a deep-learning architecture well suited to analyzing sequential data with long-range patterns. Members of the PPPL and Princeton University machine-learning team are the first to systematically apply a deep-learning approach to the problem of disruption forecasting in tokamak fusion plasmas.
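As a purely illustrative sketch (not the actual FRNN code, which trains stacked recurrent networks on real diagnostic data), the key idea of a recurrent network — carrying a hidden state forward through a time series of plasma signals so that a disruption alarm can be raised before the event — can be shown in a few lines of NumPy. Every dimension, weight, and function name here is invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 8 diagnostic signals per time step, 16 hidden units.
n_signals, n_hidden = 8, 16

# Randomly initialized weights stand in for trained parameters.
W_in = rng.normal(scale=0.1, size=(n_hidden, n_signals))
W_h = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
w_out = rng.normal(scale=0.1, size=n_hidden)

def disruption_scores(sequence):
    """Run a simple tanh recurrent cell over a (time, signals) array
    and return a disruption probability for every time step."""
    h = np.zeros(n_hidden)
    probs = []
    for x_t in sequence:
        h = np.tanh(W_in @ x_t + W_h @ h)  # hidden state carries history
        logit = w_out @ h
        probs.append(1.0 / (1.0 + np.exp(-logit)))  # sigmoid output
    return np.array(probs)

shot = rng.normal(size=(100, n_signals))  # one synthetic "shot"
p = disruption_scores(shot)
# An alarm could be raised at the first step whose score crosses 0.5.
alarm = int(np.argmax(p > 0.5)) if np.any(p > 0.5) else None
```

Because the hidden state is updated at every step, the network can accumulate evidence over long stretches of a shot — the long-range-pattern capability the article describes.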
Chief architect of FRNN is Julian Kates-Harbeck, a graduate student at Harvard University and a DOE-Office of Science Computational Science Graduate Fellow. Drawing upon expertise gained while earning a master’s degree in computer science at Stanford University, he has led the building of the FRNN software.
More accurate predictions
Using this approach, the team has demonstrated the ability to predict disruptive events more accurately than previous methods have done. By drawing from the vast database at the Joint European Torus (JET) facility in the United Kingdom — the largest and most powerful tokamak in operation — the researchers have significantly improved predictions of disruptions and reduced the number of false-positive alarms. EUROfusion, the European Consortium for the Development of Fusion Energy, manages JET research.
The team now aims to reach the challenging goals that ITER will require. These include producing 95 percent correct predictions when disruptions occur, while providing fewer than 3 percent false alarms when there are no disruptions. “On the test data sets examined, the FRNN has improved the curve for predicting true positives while reducing false positives,” said Eliot Feibush, a computational scientist at PPPL, referring to what is called the “Receiver Operating Characteristic” curve that is commonly used to measure machine learning accuracy. “We are working on bringing in more training data to do even better.”
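The two ITER targets correspond to the two rates plotted on a Receiver Operating Characteristic curve. A minimal sketch of how those rates are computed at a given alarm threshold (the function name and the tiny synthetic data are invented for illustration):

```python
import numpy as np

def tpr_fpr(y_true, scores, threshold):
    """True- and false-positive rates at a given alarm threshold.
    Sweeping the threshold traces out the ROC curve."""
    pred = scores >= threshold
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    tpr = tp / max(np.sum(y_true == 1), 1)  # caught disruptions
    fpr = fp / max(np.sum(y_true == 0), 1)  # false alarms
    return tpr, fpr

# Tiny synthetic example: 1 = disruptive shot, 0 = non-disruptive.
y = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
s = np.array([0.9, 0.8, 0.95, 0.7, 0.1, 0.2, 0.05, 0.6, 0.3, 0.15])
tpr, fpr = tpr_fpr(y, s, threshold=0.5)
# The ITER-style targets would require tpr >= 0.95 and fpr < 0.03.
```

Improving the ROC curve, as Feibush describes, means raising the true-positive rate achievable at any given false-positive rate.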
The process is highly demanding. “Training deep neural networks is a computationally intensive task that requires engagement of high-performance computing hardware,” said Alexey Svyatkovskiy, a Princeton University big data researcher. “That is why a large part of what we do is developing and distributing new algorithms across many processors to achieve highly efficient parallel computing. Such computing will handle the increasing size of problems drawn from the disruption-relevant database from JET and other tokamaks.”
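One common way to distribute training across many processors — offered here only as a generic sketch, not as the team’s actual algorithm — is synchronous data parallelism: each worker computes a gradient on its shard of the batch, and the shard gradients are averaged before the update. All function names below are invented for the example:

```python
import numpy as np

def worker_gradient(w, X, y):
    """Least-squares gradient on one worker's shard of a batch."""
    return 2 * X.T @ (X @ w - y) / len(y)

def data_parallel_step(w, X, y, n_workers, lr=0.01):
    """Split the batch across workers, compute each local gradient,
    then average them -- the core of synchronous data parallelism."""
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    grads = [worker_gradient(w, Xs, ys) for Xs, ys in shards]
    return w - lr * np.mean(grads, axis=0)

rng = np.random.default_rng(1)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
w0 = np.zeros(5)
w_parallel = data_parallel_step(w0, X, y, n_workers=4)
w_serial = data_parallel_step(w0, X, y, n_workers=1)
# With equal shard sizes, the averaged gradient equals the
# full-batch gradient, so both updates agree.
```

The averaging step is the communication bottleneck, which is why the efficiency of the parallel implementation matters as the datasets grow.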
The deep learning code runs on graphics processing units (GPUs) that can compute thousands of copies of a program at once, far more than older central processing units (CPUs). Tests performed on modern GPU clusters, and on world-class machines such as Titan, currently the fastest and most powerful U.S. supercomputer at the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility at Oak Ridge National Laboratory, have demonstrated excellent linear scaling. Such scaling reduces the computational run time in direct proportion to the number of GPUs used — a major requirement for efficient parallel processing.
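“Linear scaling” can be quantified as parallel efficiency: the speedup relative to one GPU divided by the GPU count, where 1.0 is the ideal. A minimal sketch with hypothetical timings (the function and the numbers are illustrative, not measured results from the project):

```python
def parallel_efficiency(t_one_gpu, t_n_gpus, n_gpus):
    """Speedup relative to one GPU, divided by the GPU count:
    1.0 means perfect linear scaling of run time."""
    return (t_one_gpu / t_n_gpus) / n_gpus

# Hypothetical timings (hours) for the same training job:
eff = parallel_efficiency(t_one_gpu=100.0, t_n_gpus=10.5, n_gpus=10)
# eff is about 0.95, i.e. close to the ideal linear scaling.
```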
Princeton’s Tiger cluster
Princeton University’s Tiger cluster of modern GPUs was the first to conduct deep learning tests, using FRNN to demonstrate the improved ability to predict fusion disruptions. The code has since run on Titan and other leading supercomputing GPU clusters in the United States, Europe and Asia, and has continued to show excellent scaling with the number of GPUs engaged.
Going forward, the researchers seek to demonstrate that this powerful predictive software can run on tokamaks around the world and eventually on ITER. Also planned is enhancement of the speed of disruption analysis for the increasing problem sizes associated with the larger data sets prior to the onset of a disruptive event. Support for this project has primarily come to date from the Laboratory Directed Research and Development funds provided by PPPL.