Feb 07, 2018

The analogue, natural version of C. elegans.

Is it a computer program or a living being? At TU Wien (Vienna), the boundaries become blurred. The neural system of a nematode was translated into computer code – and then the virtual worm was taught amazing tricks.

In real life, the worm reacts to touch – and the same neural circuits can perform tasks in the computer.

It is not much to look at: the nematode C. elegans is about one millimetre in length and is a very simple organism. But for science, it is extremely interesting. C. elegans is the only living being whose neural system has been analysed completely. It can be drawn as a circuit diagram or reproduced by computer software, so that the neural activity of the worm is simulated by a computer program.

Such an artificial C. elegans has now been trained at TU Wien (Vienna) to perform a remarkable trick: The computer worm has learned to balance a pole at the tip of its tail.

Mathias Lechner, Ramin Hasani and Radu Grosu (left to right)

The Worm’s Reflexive Behaviour as Computer Code
C. elegans has to get by with only around 300 neurons. But they are enough to ensure that the worm can find its way, eat bacteria and react to certain external stimuli. It can, for example, react to a touch on its body: a reflexive response is triggered and the worm squirms away.

This behaviour can be perfectly explained: it is determined by the worm’s nerve cells and the strength of the connections between them. When this simple reflex-network is recreated on a computer, then the simulated worm reacts in exactly the same way to a virtual stimulation – not because anybody programmed it to do so, but because this kind of behaviour is hard-wired in its neural network.
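As a toy illustration of this idea (a minimal sketch, not the TU Wien model), consider a three-neuron chain in which a touch stimulus on a sensory neuron propagates through an interneuron to a motor neuron. The weights and threshold below are invented for illustration; the point is that the escape behaviour is fully determined by the connection strengths, with no task-specific program logic.

```python
# Minimal sketch of a hard-wired reflex circuit: the behaviour emerges
# from fixed connection weights alone, not from explicit control code.

def reflex_response(touch_strength, w_sensor_inter=1.5, w_inter_motor=2.0,
                    threshold=0.5):
    """Return a motor command for a given touch stimulus.

    The weights are illustrative placeholders; the response is fully
    determined by them and by the interneuron's firing threshold.
    """
    sensory = touch_strength                                 # sensory neuron activation
    inter = max(0.0, w_sensor_inter * sensory - threshold)   # interneuron fires only above threshold
    motor = w_inter_motor * inter                            # motor neuron drives the escape movement
    return motor

# A light touch stays below threshold; a firm touch triggers the reflex.
print(reflex_response(0.1))  # no movement
print(reflex_response(1.0))  # escape response
```

Changing the weights changes the behaviour of the circuit without changing a single line of the "program" itself, which is exactly the handle the training described below exploits.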

“The reflexive response of such a neural circuit is very similar to the reaction of a control agent balancing a pole”, says Ramin Hasani (Institute of Computer Engineering, TU Wien). This is a classic control problem that standard controllers solve quite well: a pole is attached at its lower end to a moving object and is supposed to stay in a vertical position. Whenever it starts tilting, the lower end has to move slightly to keep the pole from tipping over. Much as the worm has to change its direction whenever it is touched, the pole must be moved whenever it tilts.
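The pole-balancing task can be sketched in a few lines. The simulation below uses the standard cart-pole dynamics from the classic control benchmark (the constants and equations follow that benchmark, not the TU Wien code), and a deliberately reflex-like bang-bang rule: push towards whichever side the pole is tilting.

```python
import math

# Cart-pole constants from the classic benchmark formulation.
GRAVITY, M_CART, M_POLE, POLE_HALF_LEN, DT = 9.8, 1.0, 0.1, 0.5, 0.02

def step(state, force):
    """Advance the cart-pole one Euler step under a horizontal force."""
    x, x_dot, theta, theta_dot = state
    total_mass = M_CART + M_POLE
    temp = (force + M_POLE * POLE_HALF_LEN * theta_dot**2 * math.sin(theta)) / total_mass
    theta_acc = (GRAVITY * math.sin(theta) - math.cos(theta) * temp) / (
        POLE_HALF_LEN * (4.0 / 3.0 - M_POLE * math.cos(theta)**2 / total_mass))
    x_acc = temp - M_POLE * POLE_HALF_LEN * theta_acc * math.cos(theta) / total_mass
    return (x + DT * x_dot, x_dot + DT * x_acc,
            theta + DT * theta_dot, theta_dot + DT * theta_acc)

def balance(steps=500):
    """Reflex-like bang-bang rule: push towards the side the pole tilts."""
    state = (0.0, 0.0, 0.05, 0.0)            # start with a small tilt
    for _ in range(steps):
        x, x_dot, theta, theta_dot = state
        force = 10.0 if theta + 0.5 * theta_dot > 0 else -10.0
        state = step(state, force)
        if abs(theta) > 0.21:                # pole fell over (about 12 degrees)
            return False
    return True

print(balance())  # the simple tilt-triggered rule keeps the pole upright
```

Note how little "intelligence" the rule needs: like the worm's touch reflex, it is a fixed mapping from stimulus (tilt) to response (push).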

Mathias Lechner, Radu Grosu and Ramin Hasani wanted to find out whether the neural system of C. elegans, uploaded to a computer, could solve this problem – without adding any nerve cells, just by tuning the strength of the synaptic connections. This basic idea – tuning the connections between nerve cells – is also the characteristic feature of any natural learning process.

A Program without a Programmer
“With the help of reinforcement learning, a method also known as ‘learning based on experiment and reward’, the artificial reflex network was trained and optimized on the computer”, Mathias Lechner explains. And indeed, the team succeeded in teaching the virtual nerve system to balance a pole. “The result is a controller which can solve a standard technology problem – stabilizing a pole balanced on its tip. But no human being has written even a single line of code for this controller; it simply emerged by training a biological nerve system”, says Radu Grosu.
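The training principle – keep the circuit's topology fixed and adjust only the connection strengths in response to reward – can be sketched with a much simpler setup than the one used at TU Wien. Below, a two-synapse "circuit" (two weights mapping tilt and tilt velocity to a restoring force) is tuned by reward-driven random search on a simplified pendulum; this hill-climbing scheme is only a basic stand-in for the reinforcement-learning method the team actually used.

```python
import math
import random

DT, GRAVITY, POLE_LEN = 0.02, 9.8, 1.0

def episode(weights, steps=300):
    """Reward = number of steps a simplified pole stays near upright."""
    theta, theta_dot = 0.05, 0.0
    w_angle, w_velocity = weights
    for t in range(steps):
        # Fixed two-synapse "circuit": sensory inputs -> motor command.
        force = w_angle * theta + w_velocity * theta_dot
        theta_acc = (GRAVITY / POLE_LEN) * math.sin(theta) - force
        theta_dot += DT * theta_acc
        theta += DT * theta_dot
        if abs(theta) > 0.5:          # pole fell over: episode ends early
            return t
    return steps

def train(iterations=300, seed=0):
    """Hill-climb the two weights: keep a random perturbation only if it
    earns at least as much reward (longer balancing) as the current best."""
    rng = random.Random(seed)
    best_w = [0.0, 0.0]
    best_r = episode(best_w)
    for _ in range(iterations):
        candidate = [w + rng.gauss(0, 2.0) for w in best_w]
        r = episode(candidate)
        if r >= best_r:
            best_w, best_r = candidate, r
    return best_w, best_r

weights, reward = train()
print(reward)  # steps survived after training
```

Nobody writes the control law here either: the untrained circuit lets the pole fall within a couple of seconds, and the balancing behaviour emerges purely from rewarded weight changes.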

The team is going to explore the capabilities of such control circuits further. The project raises the question of whether there is a fundamental difference between living nerve systems and computer code. Are machine learning and the activity of our brain the same on a fundamental level? At least we can be pretty sure that the simple nematode C. elegans does not care whether it lives as a worm in the ground or as a virtual worm on a computer hard drive.

Learn more: Worm Uploaded to a Computer and Trained to Balance a Pole


