Neural networks could be implemented more quickly using new photonic technology
“Deep learning” computer systems, based on artificial neural networks that mimic the way the brain learns from an accumulation of examples, have become a hot topic in computer science. In addition to enabling technologies such as face- and voice-recognition software, these systems could scour vast amounts of medical data to find patterns that could be useful diagnostically, or scan chemical formulas for possible new pharmaceuticals.
But the computations these systems must carry out are highly complex and demanding, even for the most powerful computers.
Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. Their results appear today in the journal Nature Photonics in a paper by MIT postdoc Yichen Shen, graduate student Nicholas Harris, professors Marin Soljačić and Dirk Englund, and eight others.
Soljačić says that many researchers over the years have made claims about optics-based computers, but that “people dramatically over-promised, and it backfired.” While many proposed uses of such photonic computers turned out not to be practical, a light-based neural-network system developed by this team “may be applicable for deep-learning for some applications,” he says.
Traditional computer architectures are not very efficient when it comes to the kinds of calculations needed for certain important neural-network tasks. Such tasks typically involve repeated multiplications of matrices, which can be very computationally intensive in conventional CPU or GPU chips.
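To make that workload concrete, here is a minimal, illustrative sketch in Python with NumPy (not code from the study, and with arbitrary layer sizes) showing how a single fully connected layer boils down to one large matrix multiplication. A deep network repeats this pattern layer after layer, which is why these multiplications dominate the cost.

```python
import numpy as np

# Illustrative only: one fully connected layer is a matrix-vector
# product followed by a nonlinearity. Sizes here are arbitrary.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 784))   # weight matrix (outputs x inputs)
x = rng.standard_normal(784)          # one input vector, e.g. a flattened image

y = W @ x                             # the dense multiplication that dominates the cost
activation = np.maximum(y, 0.0)       # a common nonlinearity (ReLU)

# A deep network chains many such layers, so the multiply-accumulate
# work grows with the product of the layer widths.
print(activation.shape)               # (512,)
```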
After years of research, the MIT team has come up with a way of performing these operations optically instead. “This chip, once you tune it, can carry out matrix multiplication with, in principle, zero energy, almost instantly,” Soljačić says. “We’ve demonstrated the crucial building blocks but not yet the full system.”
By way of analogy, Soljačić points out that even an ordinary eyeglass lens carries out a complex calculation (the so-called Fourier transform) on the light waves that pass through it. The way light beams carry out computations in the new photonic chips is far more general but has a similar underlying principle. The new approach uses multiple light beams directed in such a way that their waves interact with each other, producing interference patterns that convey the result of the intended operation. The resulting device is something the researchers call a programmable nanophotonic processor.
The result, Shen says, is that the optical chips using this architecture could, in principle, carry out calculations performed in typical artificial intelligence algorithms much faster and using less than one-thousandth as much energy per operation as conventional electronic chips. “The natural advantage of using light to do matrix multiplication plays a big part in the speedup and power savings, because dense matrix multiplications are the most power-hungry and time-consuming part in AI algorithms,” he says.
The new programmable nanophotonic processor, which was developed in the Englund lab by Harris and collaborators, uses an array of waveguides that are interconnected in a way that can be modified as needed, programming that set of beams for a specific computation. “You can program in any matrix operation,” Harris says. The processor guides light through a series of coupled photonic waveguides. The team’s full proposal calls for interleaved layers of devices that apply an operation called a nonlinear activation function, in analogy with the operation of neurons in the brain.
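For a concrete picture of that layered design, the sketch below is a rough numerical analogue in Python: programmable linear (matrix) stages interleaved with nonlinear activation stages. The matrices and the tanh function are assumed placeholders for illustration, not a model of the actual photonic components.

```python
import numpy as np

# Rough numerical analogue of the described architecture: alternating
# programmable linear stages and nonlinear "activation" stages.
rng = np.random.default_rng(1)
layers = [rng.standard_normal((8, 8)) * 0.5 for _ in range(3)]  # placeholder matrices

def nonlinear(v):
    # stand-in for the optical nonlinear activation function
    return np.tanh(v)

signal = rng.standard_normal(8)       # input encoded on 8 channels
for M in layers:
    signal = nonlinear(M @ signal)    # linear mixing, then a nonlinearity

print(signal)
```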
To demonstrate the concept, the team set the programmable nanophotonic processor to implement a neural network that recognizes four basic vowel sounds. Even with this rudimentary system, they were able to achieve a 77 percent accuracy level, compared to about 90 percent for conventional systems. There are “no substantial obstacles” to scaling up the system for greater accuracy, Soljačić says.
Englund adds that the programmable nanophotonic processor could have other applications as well, including signal processing for data transmission. “High-speed analog signal processing is something this could manage” faster than other approaches that first convert the signal to digital form, since light is an inherently analog medium. “This approach could do processing directly in the analog domain,” he says.
The team says it will still take a lot more effort and time to make this system practical; once it is scaled up and fully functioning, however, it could find many use cases, such as data centers or security systems. The system could also be a boon for self-driving cars or drones, says Harris, or “whenever you need to do a lot of computation but you don’t have a lot of power or time.”
Learn more: New system allows optical “deep learning”