The DQN network learned how to play classic video games, including Space Invaders and Breakout, without being programmed how to do so
Artificial intelligence has taken a major step forward after Google created a network which learned to play a range of computer games on its own without being pre-programmed.
The Deep Q Network (DQN) was given just the basic data from one Atari game and an algorithm that learned, by trying out different actions, which ones produced the best score.
Without any further programming, the network worked out how to play a further 48 classic video games, including Space Invaders and Breakout.
Demis Hassabis of Google’s artificial intelligence arm DeepMind said the ultimate goal was to create a computer which had the mental capabilities of a toddler.
“This work is the first time that anyone has built a single general learning system that can learn directly from experience to learn a wide range of challenging tasks,” he said.
“In this case, a set of Atari games, and perform at human level or better on those games.
“DQN can learn to play dozens of the games straight out of the box. We don’t preprogramme it between its games.
“It has a minimal set of assumptions, and all it gets access to are the raw pixel inputs and the game score. From there it has to figure out what it controls in the game world, how to get points, and how to master the game just by playing it directly.
“It’s the first artificial agent that is capable of learning to excel over a diverse array of challenging tasks.”
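The trial-and-error learning Hassabis describes is reinforcement learning: the agent tries actions, observes the score, and gradually prefers actions that lead to higher scores. A minimal sketch of the underlying Q-learning update is below; this is a deliberate simplification with a toy five-cell corridor environment invented for illustration, whereas DQN replaces the lookup table with a deep neural network reading raw pixels.

```python
import random

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Toy Q-learning: learn to walk right along a 5-cell corridor to a reward.

    The corridor environment is a made-up stand-in for an Atari game; the
    agent sees only states and a score, just as DQN sees only pixels and score.
    """
    random.seed(seed)
    n_states, actions = 5, [0, 1]          # action 0 = move left, 1 = move right
    q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # Explore occasionally; otherwise exploit the best-known action.
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: q[(s, act)])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0   # score arrives only at the goal
            # Q-learning update: nudge the estimate toward
            # (immediate reward + discounted best future value).
            best_next = max(q[(s2, b)] for b in actions)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# After training, the greedy policy should prefer "right" in every state.
policy = [max([0, 1], key=lambda a: q[(s, a)]) for s in range(4)]
print(policy)
```

The agent is never told which action means "right" or where the reward is; it discovers both purely from the score signal, which is the same principle DQN applies to Atari screens.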