#### Computers and artificial intelligence continue to usher in major changes in the way people shop. It is relatively easy to train a robot’s brain to create a shopping list, but what about ensuring that the robotic shopper can easily tell the difference between the thousands of products in the store?

Purdue University researchers and experts in brain-inspired computing think part of the answer may be found in magnets. The researchers have developed a process that uses magnetic devices with brain-like networks to program and teach devices such as personal robots, self-driving cars and drones to generalize better across different objects.

“Our stochastic neural networks try to mimic certain activities of the human brain and compute through a connection of neurons and synapses,” said Kaushik Roy, Purdue’s Edward G. Tiedemann Jr. Distinguished Professor of Electrical and Computer Engineering. “This allows the computer brain to not only store information but also to generalize well about objects and then make inferences to perform better at distinguishing between objects.”

Roy presented the technology during the annual German Physical Sciences Conference earlier this month in Germany. The work also appeared in Frontiers in Neuroscience.

The switching dynamics of a nano-magnet are similar to the electrical dynamics of neurons. Magnetic tunnel junction devices show switching behavior, which is stochastic in nature.

This stochastic switching behavior is representative of the sigmoid activation of a neuron. Such magnetic tunnel junctions can also be used to store synaptic weights.
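As a rough illustration of this idea, the sketch below simulates a binary "neuron" that switches with a sigmoid-shaped probability, so that averaging many trials recovers a smooth sigmoid activation. The device parameters (`i_half`, `slope`) are hypothetical, normalized values, not taken from the paper:

```python
import math
import random

def switching_probability(i_drive, i_half=1.0, slope=4.0):
    """Sigmoid-shaped probability that the device switches for a given
    drive current (hypothetical normalized parameters)."""
    return 1.0 / (1.0 + math.exp(-slope * (i_drive - i_half)))

def stochastic_neuron(i_drive, rng=random):
    """Fire (switch) with sigmoid probability, like a stochastic MTJ 'neuron'."""
    return 1 if rng.random() < switching_probability(i_drive) else 0

# Averaging many independent trials recovers the smooth sigmoid activation.
random.seed(0)
trials = 10_000
rate = sum(stochastic_neuron(1.5) for _ in range(trials)) / trials
```

Each single switching event is binary and random, but the firing *rate* over many events traces out the sigmoid, which is what lets such devices stand in for neurons.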

The Purdue group proposed a new stochastic training algorithm for synapses using spike timing dependent plasticity (STDP), termed Stochastic-STDP, which has been experimentally observed in the rat’s hippocampus. The magnet’s inherent stochasticity was used to switch the magnetization states probabilistically, following the proposed algorithm, to learn different object representations.
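A minimal sketch of how such a stochastic-STDP rule might look for a binary magnetic synapse, assuming an exponential dependence of switching probability on the spike-timing difference; the parameters `a` and `tau` and the exact functional form are illustrative assumptions, not the paper's algorithm:

```python
import math
import random

def stdp_switch_prob(dt, a=1.0, tau=20.0):
    """Probability of switching the binary synapse, decaying
    exponentially with |dt| = |post - pre| spike time (assumed units)."""
    return a * math.exp(-abs(dt) / tau)

def stochastic_stdp_update(weight, dt, rng=random):
    """Binary synapse update: potentiate (set to 1) when the post-synaptic
    spike follows the pre-synaptic one (dt > 0), depress (set to 0)
    otherwise -- each with a spike-timing-dependent switching probability,
    mimicking stochastic magnetization switching."""
    if rng.random() < stdp_switch_prob(dt):
        return 1 if dt > 0 else 0
    return weight
```

The key point is that learning emerges from many probabilistic binary switches rather than from fine-grained analog weight changes.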

The trained synaptic weights, encoded deterministically in the magnetization state of the nano-magnets, are then used during inference. Advantageously, the use of high-energy-barrier magnets (30-40 kT, where k is the Boltzmann constant and T is the operating temperature) not only allows compact stochastic primitives but also enables the same device to be used as a stable memory element that meets the data-retention requirement. However, the barrier height of the nano-magnets used to perform sigmoid-like neuronal computations can be lowered to 20 kT for higher energy efficiency.
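The retention trade-off can be made concrete with the standard Arrhenius-type estimate for thermally activated switching, assuming a typical attempt time of about 1 ns (an assumption, not a figure from the paper): a 40 kT barrier retains its state for years, while a 20 kT barrier flips on sub-second timescales, which is what makes the lower barrier useful for stochastic computation:

```python
import math

ATTEMPT_TIME_NS = 1.0  # assumed attempt time tau_0 of ~1 ns

def retention_time_s(barrier_kt):
    """Arrhenius estimate of mean retention time for a nanomagnet whose
    energy barrier is `barrier_kt` multiples of kT:
    tau = tau_0 * exp(barrier / kT)."""
    return ATTEMPT_TIME_NS * 1e-9 * math.exp(barrier_kt)

retention_40kt = retention_time_s(40)  # on the order of years
retention_20kt = retention_time_s(20)  # well under a second
```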

“The big advantage with the magnet technology we have developed is that it is very energy-efficient,” said Roy, who leads Purdue’s Center for Brain-inspired Computing Enabling Autonomous Intelligence. “We have created a simpler network that represents the neurons and synapses while compressing the amount of memory and energy needed to perform functions similar to brain computations.”

Roy said the brain-like networks have other uses in solving difficult problems as well, including combinatorial optimization problems such as the traveling salesman problem and graph coloring. The proposed stochastic devices can act as “natural annealers”, helping the algorithms move out of local minima.
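To illustrate how stochastic "thermal" moves help a search escape local minima, here is a generic simulated-annealing sketch for graph coloring. This is a software analogue of the idea, not the authors' device-level annealer; the graph, cooling schedule and parameters are illustrative:

```python
import math
import random

def conflicts(coloring, edges):
    """Count monochromatic edges (the graph-coloring cost)."""
    return sum(coloring[u] == coloring[v] for u, v in edges)

def anneal_coloring(nodes, edges, k=3, steps=5000, t0=2.0, seed=0):
    """Simulated annealing for k-coloring: random uphill moves, accepted
    with Boltzmann probability exp(-delta/t), let the search escape
    local minima, much as device noise would in a hardware annealer."""
    rng = random.Random(seed)
    coloring = {n: rng.randrange(k) for n in nodes}
    cost = conflicts(coloring, edges)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-3  # linear cooling schedule
        n = rng.choice(nodes)
        old = coloring[n]
        coloring[n] = rng.randrange(k)
        new_cost = conflicts(coloring, edges)
        if new_cost > cost and rng.random() >= math.exp((cost - new_cost) / t):
            coloring[n] = old  # reject the uphill move
        else:
            cost = new_cost
    return coloring, cost

# 3-color a small cycle graph with a chord (which forces a triangle).
nodes = [0, 1, 2, 3, 4]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
best, best_cost = anneal_coloring(nodes, edges)
```

Early on, the high "temperature" lets the search accept cost-increasing moves; as it cools, the dynamics settle into a low-conflict coloring it could not have reached by pure greedy descent.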

Learn more: A magnetic personality, maybe not. But magnets can help AI get closer to the efficiency of the human brain

##### The Latest on: Stochastic neural networks


- AI: The End Of Premature Deaths? on April 7, 2020 at 5:33 am
Starting with deep learning, they implemented the algorithm on a multi-layer feed-forward artificial neural network with stochastic gradient descent by using backpropagation. The train data collected ...

- SIAM Announces Class of 2020 Fellows on March 31, 2020 at 8:56 am
George Cybenko, Dartmouth College, is being recognized for contributions to theory and algorithms in signal processing, artificial neural networks, and distributed computing systems ... is being ...

- Memory devices and applications for in-memory computing on March 30, 2020 at 8:17 am
In this Review, we provide a broad overview of the key computational primitives enabled by these memory devices as well as their applications spanning scientific computing, signal processing, ...

- Solves ordinary differential equations and partial differential equations using neural nets on March 29, 2020 at 2:49 pm
The repository is for the development of neural network solvers of differential equations. It utilizes techniques like neural stochastic differential equations to make it practical to solve high ...

- Generalization properties of neural network approximations to frustrated magnet ground states on March 27, 2020 at 3:11 am
A simple yet very unrestricted variational ansatz that inherits the structure of a certain neural network—restricted Boltzmann machine (RBM)—was suggested. For the test cases of one-dimensional and ...

- How to Train a Machine Learning Radial Basis Function Network Using C# on March 23, 2020 at 5:00 pm
Unlike deep neural networks, which are very sensitive to initial weight ... update biases here } } . . . This version of stochastic gradient descent is called online training. All weights are updated ...

- Human Activity Recognition using CNN & LSTM on March 23, 2020 at 2:27 am
We cannot judge the skill of the model from a single evaluation. The reason for this is that neural networks are stochastic, meaning that a different specific model will result when training the ...

- Speeding up machine learning on March 16, 2020 at 6:10 am
Their architecture aims to speed up and improve accuracy for a particular class of machine learning algorithm called Deep Neural Networks (DNNs ... took advantage of a mathematical technique called ...

- Stochastic Modeling, Applied Mathematics, and Statistics on March 15, 2020 at 5:00 pm
The student engaged in this summer research opportunity will work on cutting edge mathematics and computational methods for stochastic modeling, Monte Carlo simulation, rare events and neural networks ...

- Graph Neural Network model calibration for trusted predictions on March 5, 2020 at 5:15 pm
Graph neural networks (GNNs) are a fast developing machine ... to a continuous vector representation trainable via stochastic gradient descent. These representations can be used as input to ...

*via Bing News*