Jun 20, 2017
 

This futuristic drawing shows programmable nanophotonic processors integrated on a printed circuit board and carrying out deep-learning computations.
Image: RedCube Inc., and courtesy of the researchers

Neural networks could be implemented more quickly using new photonic technology

“Deep learning” computer systems, based on artificial neural networks that mimic the way the brain learns from an accumulation of examples, have become a hot topic in computer science. In addition to enabling technologies such as face- and voice-recognition software, these systems could scour vast amounts of medical data to find patterns that could be useful diagnostically, or scan chemical formulas for possible new pharmaceuticals.

But the computations these systems must carry out are highly complex and demanding, even for the most powerful computers.

Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. Their results appear today in the journal Nature Photonics in a paper by MIT postdoc Yichen Shen, graduate student Nicholas Harris, professors Marin Soljačić and Dirk Englund, and eight others.

Soljačić says that many researchers over the years have made claims about optics-based computers, but that “people dramatically over-promised, and it backfired.” While many proposed uses of such photonic computers turned out not to be practical, a light-based neural-network system developed by this team “may be applicable for deep-learning for some applications,” he says.

Traditional computer architectures are not very efficient when it comes to the kinds of calculations needed for certain important neural-network tasks. Such tasks typically involve repeated multiplications of matrices, which can be very computationally intensive in conventional CPU or GPU chips.
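For readers who want to see this concretely, here is a minimal NumPy sketch (illustrative only, not the team’s code; the layer width and names are hypothetical) of why such workloads are dominated by matrix multiplication: every layer of a network multiplies its input vector by a weight matrix, so even a small stack of layers costs millions of multiply-accumulate operations.

```python
import numpy as np

def dense_layer(x, W, b):
    # One fully connected layer: the d x d matrix-vector product is
    # the expensive part; the nonlinearity is comparatively cheap.
    return np.tanh(W @ x + b)

rng = np.random.default_rng(0)
d = 512                                   # illustrative layer width
x = rng.standard_normal(d)
layers = [(rng.standard_normal((d, d)) / np.sqrt(d), np.zeros(d))
          for _ in range(4)]

for W, b in layers:                       # 4 layers -> ~4 * d^2 ~ 1M MACs
    x = dense_layer(x, W, b)
print(x.shape)                            # (512,)
```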

After years of research, the MIT team has come up with a way of performing these operations optically instead. “This chip, once you tune it, can carry out matrix multiplication with, in principle, zero energy, almost instantly,” Soljačić says. “We’ve demonstrated the crucial building blocks but not yet the full system.”

By way of analogy, Soljačić points out that even an ordinary eyeglass lens carries out a complex calculation (the so-called Fourier transform) on the light waves that pass through it. The way light beams carry out computations in the new photonic chips is far more general but has a similar underlying principle. The new approach uses multiple light beams directed in such a way that their waves interact with each other, producing interference patterns that convey the result of the intended operation. The resulting device is something the researchers call a programmable nanophotonic processor.
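As a rough numerical illustration of that lens analogy (a toy model, not the device itself): the light field at a lens’s back focal plane is, up to scaling, the Fourier transform of the field at its front focal plane, and a discrete FFT can stand in for that continuous optical transform.

```python
import numpy as np

# Toy stand-in for what a lens does passively: the field at the back
# focal plane is (up to scaling) the Fourier transform of the field
# at the front focal plane. A discrete FFT plays that role here.
field = np.zeros(1024)
field[448:576] = 1.0                      # a slit-like input aperture

far_field = np.fft.fftshift(np.fft.fft(field))
intensity = np.abs(far_field) ** 2        # what a detector would record
print(intensity.argmax())                 # brightest at the central order
```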

The result, Shen says, is that optical chips using this architecture could, in principle, carry out the calculations performed in typical artificial intelligence algorithms much faster, and using less than one-thousandth as much energy per operation, than conventional electronic chips. “The natural advantage of using light to do matrix multiplication plays a big part in the speedup and power savings, because dense matrix multiplications are the most power-hungry and time-consuming part of AI algorithms,” he says.

The new programmable nanophotonic processor, which was developed in the Englund lab by Harris and collaborators, uses an array of waveguides that are interconnected in a way that can be modified as needed, programming that set of beams for a specific computation. “You can program in any matrix operation,” Harris says. The processor guides light through a series of coupled photonic waveguides. The team’s full proposal calls for interleaved layers of devices that apply an operation called a nonlinear activation function, in analogy with the operation of neurons in the brain.
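A minimal sketch of that idea (illustrative; the function names and the nonlinearity model are assumptions, not the paper’s exact scheme): any weight matrix M can be factored by singular value decomposition as M = U Σ V†, where U and V† are unitary, and hence realizable in principle as meshes of beam splitters and phase shifters, while Σ is a row of per-channel attenuations. Interleaving such linear layers with a saturating nonlinearity gives the overall network structure.

```python
import numpy as np

def photonic_matmul(M, x):
    # Factor M = U @ diag(s) @ Vh by SVD. U and Vh are unitary, so in
    # principle each maps onto a mesh of beam splitters and phase
    # shifters; diag(s) is a row of per-channel attenuators/amplifiers.
    U, s, Vh = np.linalg.svd(M)
    return U @ (s * (Vh @ x))             # identical to M @ x

def saturating_nonlinearity(x, p_sat=1.0):
    # Hypothetical optical activation: the response saturates at high
    # amplitude, loosely mimicking a saturable optical element.
    return x / (1.0 + np.abs(x) / p_sat)

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

np.testing.assert_allclose(photonic_matmul(M, x), M @ x)
y = saturating_nonlinearity(photonic_matmul(M, x))   # one "optical" layer
print(y)
```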

To demonstrate the concept, the team set the programmable nanophotonic processor to implement a neural network that recognizes four basic vowel sounds. Even with this rudimentary system, they were able to achieve a 77 percent accuracy level, compared to about 90 percent for conventional systems. There are “no substantial obstacles” to scaling up the system for greater accuracy, Soljačić says.

Englund adds that the programmable nanophotonic processor could have other applications as well, including signal processing for data transmission. “High-speed analog signal processing is something this could manage” faster than other approaches that first convert the signal to digital form, since light is an inherently analog medium. “This approach could do processing directly in the analog domain,” he says.

The team says it will still take much more effort and time to make the system practical; once it is scaled up and fully functioning, however, it could find many use cases, such as data centers or security systems. The system could also be a boon for self-driving cars or drones, says Harris, or “whenever you need to do a lot of computation but you don’t have a lot of power or time.”

Learn more: New system allows optical “deep learning”

 
