Jun 20, 2017

This futuristic drawing shows programmable nanophotonic processors integrated on a printed circuit board and carrying out deep learning computations.
Image: RedCube Inc., and courtesy of the researchers

Neural networks could be implemented more quickly using new photonic technology

“Deep learning” computer systems, based on artificial neural networks that mimic the way the brain learns from an accumulation of examples, have become a hot topic in computer science. In addition to enabling technologies such as face- and voice-recognition software, these systems could scour vast amounts of medical data to find patterns that could be useful diagnostically, or scan chemical formulas for possible new pharmaceuticals.

But the computations these systems must carry out are highly complex and demanding, even for the most powerful computers.

Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. Their results appear today in the journal Nature Photonics, in a paper by MIT postdoc Yichen Shen, graduate student Nicholas Harris, professors Marin Soljačić and Dirk Englund, and eight others.

Soljačić says that many researchers over the years have made claims about optics-based computers, but that “people dramatically over-promised, and it backfired.” While many proposed uses of such photonic computers turned out not to be practical, a light-based neural-network system developed by this team “may be applicable for deep-learning for some applications,” he says.

Traditional computer architectures are not very efficient when it comes to the kinds of calculations needed for certain important neural-network tasks. Such tasks typically involve repeated multiplications of matrices, which can be very computationally intensive in conventional CPU or GPU chips.
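To make the cost concrete, here is a minimal sketch (not from the paper; the layer sizes and network are illustrative) of why neural-network inference is dominated by matrix multiplications: each dense layer is a matrix multiply followed by a cheap elementwise nonlinearity, and the multiply costs roughly two operations per weight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: each layer is a dense matrix multiply
# followed by an elementwise nonlinearity. The matmuls dominate:
# roughly 2*m*n multiply-adds per (m x n) weight matrix.
W1 = rng.standard_normal((64, 128))   # layer-1 weights (illustrative sizes)
W2 = rng.standard_normal((128, 10))   # layer-2 weights

def relu(x):
    return np.maximum(x, 0.0)

def forward(x):
    return relu(x @ W1) @ W2

x = rng.standard_normal((1, 64))
y = forward(x)
print(y.shape)   # (1, 10)

# Multiply-add count for even this tiny net:
flops = 2 * (64 * 128 + 128 * 10)
print(flops)     # 18944
```

Real networks have millions of such weights, so accelerating the matrix multiply itself is where the payoff lies.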

After years of research, the MIT team has come up with a way of performing these operations optically instead. “This chip, once you tune it, can carry out matrix multiplication with, in principle, zero energy, almost instantly,” Soljačić says. “We’ve demonstrated the crucial building blocks but not yet the full system.”

By way of analogy, Soljačić points out that even an ordinary eyeglass lens carries out a complex calculation (the so-called Fourier transform) on the light waves that pass through it. The way light beams carry out computations in the new photonic chips is far more general but has a similar underlying principle. The new approach uses multiple light beams directed in such a way that their waves interact with each other, producing interference patterns that convey the result of the intended operation. The resulting device is something the researchers call a programmable nanophotonic processor.
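The lens analogy can be checked numerically: a thin lens maps the light field in its front focal plane to that field's Fourier transform in the back focal plane. This toy simulation (an assumption-laden sketch, not the team's model) sends a plane wave through a 1-D slit aperture and recovers the familiar sinc-shaped diffraction pattern from an FFT.

```python
import numpy as np

# A thin lens produces the Fourier transform of the field at its
# input plane. Simulate a 1-D slit aperture illuminated by a plane
# wave; the "far field" is just the FFT of the aperture function.
n = 256
aperture = np.zeros(n)
aperture[n // 2 - 8 : n // 2 + 8] = 1.0        # 16-sample-wide slit

far_field = np.fft.fftshift(np.fft.fft(aperture))
intensity = np.abs(far_field) ** 2

# The brightest point is the central (zero-frequency) order, whose
# amplitude equals the total transmitted field (16 samples here),
# giving an intensity of 16**2 = 256.
print(intensity.argmax() == n // 2)   # True
print(round(intensity.max()))         # 256
```

The computation here is "free" in the optical picture: propagation through the lens performs the transform as the light travels.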

The result, Shen says, is that the optical chips using this architecture could, in principle, carry out calculations performed in typical artificial intelligence algorithms much faster and using less than one-thousandth as much energy per operation as conventional electronic chips. “The natural advantage of using light to do matrix multiplication plays a big part in the speedup and power savings, because dense matrix multiplications are the most power-hungry and time-consuming part in AI algorithms,” he says.

The new programmable nanophotonic processor, which was developed in the Englund lab by Harris and collaborators, uses an array of waveguides that are interconnected in a way that can be modified as needed, programming that set of beams for a specific computation. “You can program in any matrix operation,” Harris says. The processor guides light through a series of coupled photonic waveguides. The team’s full proposal calls for interleaved layers of devices that apply an operation called a nonlinear activation function, in analogy with the operation of neurons in the brain.
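One standard way to "program in any matrix operation" on such hardware, described in the team's paper, is to factor the weight matrix with a singular value decomposition: the two unitary factors map onto meshes of interferometers (beam splitters plus phase shifters) and the diagonal factor onto per-channel attenuators. The sketch below checks only the linear algebra of that factorization in NumPy; the matrix sizes and values are illustrative, and no optics is simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Factor an arbitrary weight matrix M as U @ diag(s) @ Vh.
# On the chip, U and Vh would be realized by interferometer
# meshes and diag(s) by amplitude modulators.
M = rng.standard_normal((4, 4))
U, s, Vh = np.linalg.svd(M)

# "Optical" evaluation: unitary mixing, per-mode scaling, unitary mixing.
x = rng.standard_normal(4)
y_optical = U @ (s * (Vh @ x))

y_direct = M @ x
print(np.allclose(y_optical, y_direct))   # True
```

A nonlinear activation applied between such programmable layers, as the paragraph above describes, would complete one layer of an optical neural network.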

To demonstrate the concept, the team set the programmable nanophotonic processor to implement a neural network that recognizes four basic vowel sounds. Even with this rudimentary system, they were able to achieve a 77 percent accuracy level, compared to about 90 percent for conventional systems. There are “no substantial obstacles” to scaling up the system for greater accuracy, Soljačić says.

Englund adds that the programmable nanophotonic processor could have other applications as well, including signal processing for data transmission. “High-speed analog signal processing is something this could manage” faster than other approaches that first convert the signal to digital form, since light is an inherently analog medium. “This approach could do processing directly in the analog domain,” he says.

The team says it will take much more effort and time to make this system practical; once it is scaled up and fully functioning, however, it could find many uses, such as in data centers or security systems. The system could also be a boon for self-driving cars or drones, says Harris, or “whenever you need to do a lot of computation but you don’t have a lot of power or time.”

Learn more: New system allows optical “deep learning”

