An international debate is needed on the use of autonomous military robots, a leading academic has said.
Noel Sharkey of the University of Sheffield said that a push toward more robotic technology used in warfare would put civilian life at grave risk.
Technology capable of distinguishing friend from foe reliably was at least 50 years away, he added.
However, he said that for the first time, US forces mentioned resolving such ethical concerns in their plans.
“Robots that can decide where to kill, who to kill and when to kill is high on all the military agendas,” Professor Sharkey said at a meeting in London.
“The problem is that this is all based on artificial intelligence, and the military have a strange view of artificial intelligence based on science fiction.”
Professor Sharkey, a professor of artificial intelligence and robotics, has long drawn attention to the psychological distance from the horrors of war that is maintained by operators who pilot unmanned aerial vehicles (UAVs), often from thousands of miles away.
“These guys who are driving them sit there all day…they go home and eat dinner with their families at night,” he said.
“It’s kind of a very odd way of fighting a war – it’s changing the character of war dramatically.”
The rise in technology has not helped in terms of limiting collateral damage, Professor Sharkey said, because the military intelligence behind attacks was not keeping pace.
Between January 2006 and April 2009, he estimated, 60 such “drone” attacks were carried out in Pakistan. While 14 al-Qaeda members were killed, some 687 civilians also died, he said.
That physical distance from the actual theatre of war, he said, led naturally to a far greater concern: the push toward unmanned planes and ground robots that make their decisions without the help of human operators at all.
The problem, he said, was that robots could not fulfil two of the basic tenets of warfare: discriminating friend from foe, and “proportionality”, determining a reasonable amount of force to gain a given military advantage.
“Robots do not have the necessary discriminatory ability,” he explained.
“They’re not bright enough to be called stupid – they can’t discriminate between civilians and non-civilians; it’s hard enough for soldiers to do that.
“And forget about proportionality, there’s no software that can make a robot proportional,” he added.
“There’s no objective calculus of proportionality – it’s just a decision that people make.”