Is it a computer program or a living being? At TU Wien (Vienna), the boundaries become blurred. The neural system of a nematode was translated into computer code – and then the virtual worm was taught amazing tricks.
It is not much to look at: the nematode C. elegans is about one millimetre in length and is a very simple organism. But for science, it is extremely interesting. C. elegans is the only living being whose neural system has been analysed completely. It can be drawn as a circuit diagram or reproduced by computer software, so that the neural activity of the worm is simulated by a computer program.
Such an artificial C. elegans has now been trained at TU Wien (Vienna) to perform a remarkable trick: The computer worm has learned to balance a pole at the tip of its tail.
The Worm’s Reflexive Behaviour as Computer Code
C. elegans has to get by with only around 300 neurons. But they are enough to make sure that the worm can find its way, eat bacteria and respond to certain external stimuli. When it is touched on the body, for example, a reflexive response is triggered and the worm squirms away.
This behaviour can be perfectly explained: it is determined by the worm’s nerve cells and the strength of the connections between them. When this simple reflex-network is recreated on a computer, then the simulated worm reacts in exactly the same way to a virtual stimulation – not because anybody programmed it to do so, but because this kind of behaviour is hard-wired in its neural network.
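A minimal sketch of what "hard-wired" means here: a chain of fixed synaptic weights that turns a touch stimulus into an escape response. The neuron chain, weights and threshold below are invented for illustration and are not taken from the actual C. elegans circuit.

```python
# Illustrative only: a tiny fixed-weight reflex chain,
# touch sensor -> interneuron -> motor neuron.
# All names and numbers are assumptions for the sketch.

def reflex(touch_signal, w_sensor_inter=1.5, w_inter_motor=2.0, threshold=1.0):
    """Propagate a touch stimulus through a two-synapse chain.

    Returns the motor output; a positive value means 'squirm away'."""
    sensory = touch_signal                                 # sensory neuron fires with the stimulus
    inter = max(0.0, w_sensor_inter * sensory - threshold) # interneuron: thresholded relay
    motor = w_inter_motor * inter                          # motor neuron drives the escape response
    return motor

print(reflex(0.0))      # no touch -> no response: 0.0
print(reflex(1.0) > 0)  # strong touch -> reflexive escape: True
```

Because the weights are fixed, the same stimulus always produces the same response; nobody "programs" the behaviour beyond wiring the circuit.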
“This reflexive response of such a neural circuit is very similar to the reaction of a control agent balancing a pole”, says Ramin Hasani (Institute of Computer Engineering, TU Wien). This is a typical control problem which can be solved quite well by standard controllers: a pole is attached at its lower end to a moving object, and it is supposed to stay in a vertical position. Whenever it starts tilting, the lower end has to move slightly to keep the pole from tipping over. Much like the worm has to change its direction whenever it is stimulated by a touch, the pole must be moved whenever it tilts.
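The control problem can be sketched in a few lines of Python. The simplified pendulum physics and the feedback gains below are illustrative assumptions, not the model used in the study:

```python
import math

# A minimal sketch of the pole-balancing ("inverted pendulum") problem.
# The base of the pole is accelerated toward the tilt to push the pole
# back upright.  Physics is simplified; KP and KD are illustrative gains.

G, L = 9.81, 1.0     # gravity (m/s^2), pole length (m)
KP, KD = 20.0, 5.0   # proportional and derivative feedback gains
DT = 0.01            # simulation time step (s)

def simulate(theta=0.1, theta_dot=0.0, steps=500):
    """theta is the tilt angle (rad) from vertical; returns the final angle."""
    for _ in range(steps):
        accel = KP * theta + KD * theta_dot  # controller: accelerate toward the tilt
        theta_ddot = (G * math.sin(theta) - accel * math.cos(theta)) / L
        theta_dot += theta_ddot * DT         # explicit Euler integration
        theta += theta_dot * DT
    return theta

final = simulate()
print(abs(final) < 0.01)  # the controller has steered the pole back to vertical
```

With the gains above the closed loop is damped, so a small initial tilt decays back toward zero instead of growing; without feedback (KP = KD = 0) the same simulation tips over.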
Mathias Lechner, Radu Grosu and Ramin Hasani wanted to find out whether the neural system of C. elegans, uploaded to a computer, could solve this problem – without adding any nerve cells, just by tuning the strength of the synaptic connections. This basic idea (tuning the connections between nerve cells) is also the characteristic feature of any natural learning process.
A Program without a Programmer
“With the help of reinforcement learning, a method also known as ‘learning based on experiment and reward’, the artificial reflex network was trained and optimized on the computer”, Mathias Lechner explains. And indeed, the team succeeded in teaching the virtual nerve system to balance a pole. “The result is a controller that can solve a standard control problem – stabilizing a pole balanced on its tip. But no human being has written a single line of code for this controller; it just emerged by training a biological nerve system”, says Radu Grosu.
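As an illustration of ‘learning based on experiment and reward’, the sketch below tunes two feedback weights by simple random search: try a small random change, and keep it only if the pole stays up longer. This stands in for the team’s actual reinforcement-learning method only loosely; the pendulum model and all constants are assumptions for the example.

```python
import math, random

# Illustrative only: tuning two 'synaptic' weights by trial and reward.
# Reward = number of time steps the pole stays within 0.5 rad of vertical.

G, L, DT = 9.81, 1.0, 0.02

def reward(w1, w2, steps=300):
    theta, theta_dot = 0.1, 0.0
    for t in range(steps):
        accel = w1 * theta + w2 * theta_dot  # the two tunable weights
        theta_ddot = (G * math.sin(theta) - accel * math.cos(theta)) / L
        theta_dot += theta_ddot * DT
        theta += theta_dot * DT
        if abs(theta) > 0.5:                 # pole fell over
            return t
    return steps

random.seed(0)
best_w, best_r = (0.0, 0.0), reward(0.0, 0.0)
for _ in range(200):                         # trial-and-reward loop
    cand = (best_w[0] + random.gauss(0, 5), best_w[1] + random.gauss(0, 5))
    r = reward(*cand)
    if r > best_r:                           # keep weights that earn more reward
        best_w, best_r = cand, r

print(reward(0.0, 0.0), best_r)  # untrained vs. trained survival time
```

The structure of the controller never changes – only the connection strengths do, which mirrors the paper’s point that tuning synaptic weights alone can turn a reflex circuit into a working controller.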
The team is going to explore the capabilities of such control circuits further. The project raises the question of whether there is a fundamental difference between living nerve systems and computer code. Are machine learning and the activity of our brain the same on a fundamental level? At least we can be pretty sure that the simple nematode C. elegans does not care whether it lives as a worm in the ground or as a virtual worm on a computer hard drive.