Computers and artificial intelligence continue to usher in major changes in the way people shop. It is relatively easy to train a robot’s brain to create a shopping list, but what about ensuring that the robotic shopper can easily tell the difference between the thousands of products in the store?
Purdue University researchers and experts in brain-inspired computing think part of the answer may be found in magnets. The researchers have developed a process to use magnetics with brain-like networks to program and teach devices such as personal robots, self-driving cars and drones to better generalize about different objects.
“Our stochastic neural networks try to mimic certain activities of the human brain and compute through a connection of neurons and synapses,” said Kaushik Roy, Purdue’s Edward G. Tiedemann Jr. Distinguished Professor of Electrical and Computer Engineering. “This allows the computer brain to not only store information but also to generalize well about objects and then make inferences to perform better at distinguishing between objects.”
The switching dynamics of a nano-magnet are similar to the electrical dynamics of neurons: magnetic tunnel junction (MTJ) devices switch between magnetization states stochastically.
This stochastic switching probability traces out a sigmoid, much like the activation of a neuron. The same magnetic tunnel junctions can also be used to store synaptic weights.
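The mapping from stochastic switching to a sigmoid neuron can be sketched in a few lines. This is an illustrative software model, not the device physics from the paper; the `slope` and `threshold` parameters are hypothetical stand-ins for how input current biases the switching probability:

```python
import math
import random

def mtj_neuron_fires(input_current, slope=1.0, threshold=0.0, rng=random):
    """Stochastic neuron sketch: the magnet switches (the neuron 'fires')
    with a sigmoid probability of the input, mimicking an MTJ's
    probabilistic switching statistics."""
    p_switch = 1.0 / (1.0 + math.exp(-slope * (input_current - threshold)))
    return rng.random() < p_switch
```

Averaged over many trials, the firing rate recovers the sigmoid, which is what lets such a device stand in for a neuron's activation function.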
The Purdue group proposed a new stochastic training algorithm for synapses based on spike-timing-dependent plasticity (STDP), a learning rule that has been experimentally observed in the rat hippocampus; they term the algorithm Stochastic-STDP. The magnet's inherent stochastic behavior was used to switch the magnetization states probabilistically under this algorithm, allowing the network to learn representations of different objects.
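As a rough illustration of the idea (a sketch, not the authors' actual algorithm), a binary magnetic synapse can be updated with an STDP-shaped switching probability: the closer in time the pre- and post-synaptic spikes, the more likely the magnet flips. All parameter values here are invented for the example:

```python
import math
import random

def stochastic_stdp_update(weight, t_pre, t_post, a_plus=0.8, a_minus=0.6,
                           tau=20.0, rng=random):
    """Stochastic-STDP sketch for a binary magnetic synapse.
    The switching probability decays exponentially with the pre/post
    spike-time difference, as in classic STDP curves; the switch itself
    is a random event, mirroring the magnet's stochastic flip."""
    dt = t_post - t_pre
    if dt >= 0:  # pre spike before post spike: potentiate toward 1
        p = a_plus * math.exp(-dt / tau)
        return 1 if rng.random() < p else weight
    else:        # post spike before pre spike: depress toward 0
        p = a_minus * math.exp(dt / tau)
        return 0 if rng.random() < p else weight
```

Because each synapse only ever holds 0 or 1, the learned analog weight emerges statistically across many updates and many devices, which suits hardware whose switching is intrinsically random.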
The trained synaptic weights, encoded deterministically in the magnetization state of the nano-magnets, are then used during inference. Advantageously, the use of high-energy-barrier magnets (30–40 kT, where k is the Boltzmann constant and T is the operating temperature) not only allows compact stochastic primitives but also lets the same device serve as a stable memory element that meets data-retention requirements. The barrier height of the nano-magnets used for sigmoid-like neuronal computations, however, can be lowered to about 20 kT for higher energy efficiency.
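The trade-off between retention and energy follows from the Néel–Arrhenius relation, tau = tau0 · exp(Δ/kT), where Δ is the energy barrier. The 1 ns attempt time below is a typical assumed value, not a figure from the article; the point is only the exponential gap between a 40 kT memory magnet and a 20 kT stochastic one:

```python
import math

TAU0_S = 1e-9  # attempt time, ~1 ns (a typical assumed value)

def retention_time_s(barrier_in_kT):
    """Mean retention time from the Neel-Arrhenius law:
    tau = tau0 * exp(Delta / kT), with the barrier given in units of kT."""
    return TAU0_S * math.exp(barrier_in_kT)

# A 40 kT barrier retains its state for years, suitable for storing weights;
# a 20 kT barrier flips in well under a second, suitable as a fast
# stochastic switching element.
years_at_40kT = retention_time_s(40.0) / (3600 * 24 * 365)
seconds_at_20kT = retention_time_s(20.0)
```

Halving the barrier thus shortens retention by a factor of exp(20), roughly eight orders of magnitude, which is why the same device family can play both the memory role and the neuron role.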
“The big advantage with the magnet technology we have developed is that it is very energy-efficient,” said Roy, who leads Purdue’s Center for Brain-inspired Computing Enabling Autonomous Intelligence. “We have created a simpler network that represents the neurons and synapses while compressing the amount of memory and energy needed to perform functions similar to brain computations.”
Roy said the brain-like networks have other uses in solving difficult problems as well, including combinatorial optimization problems such as the traveling salesman problem and graph coloring. The proposed stochastic devices can act as "natural annealers," helping the algorithms escape local minima.
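The "natural annealer" role can be illustrated in software with ordinary simulated annealing on a small graph-coloring instance. In hardware, the low-barrier magnets would supply the random moves that this sketch draws from a pseudo-random generator:

```python
import math
import random

def anneal_coloring(edges, n_nodes, n_colors, steps=20000, seed=0):
    """Simulated-annealing sketch of graph coloring: random recoloring
    moves are accepted with a temperature-dependent probability, the same
    kind of thermal randomness the article attributes to low-barrier
    nano-magnets acting as 'natural annealers'."""
    rng = random.Random(seed)
    colors = [rng.randrange(n_colors) for _ in range(n_nodes)]

    def conflicts(c):
        return sum(1 for u, v in edges if c[u] == c[v])

    cost = conflicts(colors)
    for step in range(steps):
        temp = max(1e-3, 1.0 - step / steps)  # linear cooling schedule
        node = rng.randrange(n_nodes)
        old = colors[node]
        colors[node] = rng.randrange(n_colors)
        new_cost = conflicts(colors)
        # Downhill moves are always accepted; uphill moves with Boltzmann
        # probability, which is what lets the search escape local minima.
        if new_cost > cost and rng.random() >= math.exp((cost - new_cost) / temp):
            colors[node] = old  # reject the move
        else:
            cost = new_cost
    return colors, cost
```

The occasional acceptance of uphill moves early on plays the same role as the device's stochastic flips: it keeps the solver from freezing into a suboptimal configuration.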