Is it a computer program or a living being? At TU Wien (Vienna), the boundaries become blurred. The neural system of a nematode was translated into computer code – and then the virtual worm was taught amazing tricks.
It is not much to look at: the nematode C. elegans is about one millimetre long and a very simple organism. But for science it is extremely interesting: C. elegans is the only living being whose neural system has been analysed completely. It can be drawn as a circuit diagram, or reproduced by computer software that simulates the worm's neural activity.
Such an artificial C. elegans has now been trained at TU Wien (Vienna) to perform a remarkable trick: The computer worm has learned to balance a pole at the tip of its tail.
The Worm’s Reflexive Behaviour as Computer Code
C. elegans has to get by with only 300 neurons. But they are enough to make sure that the worm can find its way, eat bacteria and react to certain external stimuli. It can, for example, react to a touch on its body. A reflexive response is triggered and the worm squirms away.
This behaviour can be perfectly explained: it is determined by the worm’s nerve cells and the strength of the connections between them. When this simple reflex network is recreated on a computer, the simulated worm reacts to a virtual stimulus in exactly the same way – not because anybody programmed it to do so, but because this behaviour is hard-wired in its neural network.
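The idea that behaviour is fixed by wiring alone can be illustrated with a toy sketch. The neuron chain, weights and threshold below are hypothetical, not the worm's actual circuit: a touch signal simply propagates through fixed synaptic weights to a motor neuron, and the "reflex" follows from the wiring with no control logic.

```python
# Hypothetical sketch: a touch reflex as a tiny feed-forward network.
# A sensory neuron projects to an interneuron, which drives a motor neuron.
# The response is determined entirely by the synaptic weights.

def step(touch, w_si=1.2, w_im=0.9, threshold=0.5):
    """Propagate a touch stimulus through a 3-neuron chain."""
    sensory = touch            # sensory neuron activity follows the stimulus
    inter = w_si * sensory     # interneuron activity via the first synapse
    motor = w_im * inter       # motor neuron activity via the second synapse
    return "squirm away" if motor > threshold else "rest"

print(step(touch=1.0))   # strong touch -> squirm away
print(step(touch=0.1))   # weak touch -> rest
```

Nothing in `step` encodes "escape from touch" explicitly; changing the weights would change the behaviour, which is exactly the lever the researchers later use for learning.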
“The reflexive response of such a neural circuit is very similar to the reaction of a control agent balancing a pole”, says Ramin Hasani (Institute of Computer Engineering, TU Wien). This is a typical control problem which standard controllers can solve quite well: a pole is fixed at its lower end to a moving object and is supposed to stay vertical. Whenever it starts tilting, the lower end has to move slightly to keep the pole from tipping over. Much like the worm changes direction whenever it is touched, the pole must be moved whenever it tilts.
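To make the control problem concrete, here is a minimal sketch of the "standard controller" the text alludes to: simplified pendulum dynamics with a proportional-derivative correction. All parameters and the dynamics model are illustrative assumptions, not the setup used at TU Wien.

```python
import math

# Simplified pole-balancing sketch (assumed parameters):
# the pole tilts under gravity, and a proportional-derivative
# controller applies a counteracting force whenever it tilts.

g, length, dt = 9.81, 1.0, 0.01   # gravity, pole length, time step
theta, omega = 0.05, 0.0          # initial tilt (rad) and angular velocity

for _ in range(1000):             # simulate 10 seconds
    force = -20.0 * theta - 4.0 * omega        # react to tilt and tilt rate
    accel = (g * math.sin(theta) + force) / length
    omega += accel * dt                        # semi-implicit Euler step
    theta += omega * dt

print(abs(theta) < 0.01)   # the pole has settled near vertical -> True
```

The controller does nothing more than "move against the tilt", which is why a reflex circuit shaped by touch responses is a plausible candidate for the same job.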
Mathias Lechner, Radu Grosu and Ramin Hasani wanted to find out whether the neural system of C. elegans, uploaded to a computer, could solve this problem – without adding any nerve cells, just by tuning the strength of the synaptic connections. This basic idea – tuning the connections between nerve cells – is also the characteristic feature of any natural learning process.
A Program without a Programmer
“With the help of reinforcement learning, a method also known as ‘learning by trial and reward’, the artificial reflex network was trained and optimized on the computer”, Mathias Lechner explains. And indeed, the team succeeded in teaching the virtual nerve system to balance a pole. “The result is a controller which can solve a standard technology problem – stabilizing a pole balanced on its tip. But no human being has written a single line of code for this controller; it simply emerged by training a biological nerve system”, says Radu Grosu.
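The principle of trial-and-reward learning with a fixed circuit can be sketched in a few lines. The task, reward function and search method below are hypothetical stand-ins (simple random-search hill climbing, not the paper's actual training algorithm): the structure of the network stays fixed, and only the synaptic weights are perturbed and kept when the reward improves.

```python
import random

# Hedged sketch of reward-driven weight tuning: the two-synapse circuit
# is fixed, and random perturbations of its weights are accepted only
# when they increase the reward (hill climbing, not the paper's method).

random.seed(0)

def reward(w):
    # hypothetical task: the circuit should map input 1.0 to output 0.5
    output = w[0] * 1.0 * w[1]
    return -abs(output - 0.5)        # higher reward = closer to target

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
best = reward(weights)
for _ in range(2000):
    trial = [w + random.gauss(0, 0.1) for w in weights]
    r = reward(trial)
    if r > best:                     # keep a perturbation only if it helps
        weights, best = trial, r

print(best > -0.01)   # the tuned weights solve the task -> True
```

No behaviour is programmed in; it emerges from repeated trials scored by reward, which is the same experiment-and-reward loop the researchers applied to the worm's network.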
The team is going to explore the capabilities of such control circuits further. The project raises the question of whether there is a fundamental difference between living nerve systems and computer code. Are machine learning and the activity of our brain fundamentally the same? At least we can be pretty sure that the simple nematode C. elegans does not care whether it lives as a worm in the ground or as a virtual worm on a computer hard drive.