Ask one technologist and he or she might say that lethal autonomous weapons — machines that can select and destroy targets without human intervention — are the next step in modern warfare, a natural evolution beyond today’s remotely operated drones and unmanned ground vehicles.
Others will decry such systems as an abomination and a threat to International Humanitarian Law (IHL), or the Law of Armed Conflict.
The U.N. Human Rights Council has, for now, called for a moratorium on the development of killer robots. But activist groups like the International Committee for Robot Arms Control (ICRAC) want to see this class of weapon banned outright. The question is whether it is too early — or too late — for a blanket prohibition. Indeed, depending on how one defines "autonomy," such systems are already in use.
From stones to arrows to ballistic missiles, human beings have always tried to curtail their direct involvement in combat, said Ronald Arkin, a computer scientist at Georgia Institute of Technology. Military robots are just more of the same. With autonomous systems, people no longer do the targeting, but they still program, activate and deploy these weapons.
“There will always be a human in the kill chain with these lethal autonomous systems unless you’re making the case that they can go off and declare war like the Cylons,” said Arkin, referring to the warring cyborgs from “Battlestar Galactica.” He added, “I enjoy science-fiction as much as the next person, but I don’t think that’s what this debate should be about at this point in time.”
Peter Asaro, however, is not impressed with this domino theory of agency. A philosopher of science at The New School in New York and co-founder of ICRAC, Asaro contends that such weapons would use deadly force without "meaningful human control." Killer robots would thereby be cast as moral agents, a role he doubts they can fulfill under International Humanitarian Law. That, he says, is why these systems must be banned.
Choosing targets, ranking values
According to the Law of Armed Conflict, a combatant has a duty to keep civilian casualties to a minimum. This means using weapons in a discriminating fashion and ensuring that, when civilians are killed, their incidental deaths are outweighed by the importance of the military objective — a calculation that entails value judgments.
In terms of assessing a battlefield scene, no technology surpasses the human eye and brain. "It's very aspirational to think that we'll get a drone that can pick a known individual out of a crowd. That's not going to happen for a long, long, long, long time," said Mary "Missy" Cummings, director of MIT's Humans and Automation Laboratory and a former F-18 pilot.
Still, a fully autonomous aircraft would do much better than a person at, say, picking up the distinctive electronic signature of a radar signal or the low rumbling of a tank. In fact, pilots make most of their targeting errors when they try to do it by sight, Cummings told Live Science.
As for a robot deciding when to strike a target, Arkin believes that human ethical judgments can be programmed into a weapons system. In fact, he has worked on a prototype software program called the Ethical Governor, which promises to serve as an internal constraint on machine actions that would violate IHL. "It's kind of like putting a muzzle on a dog," he said.
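In rough outline, such a governor acts as a gate between target selection and weapon release: the system may act only if a proposed engagement passes machine-checkable constraints derived from IHL, such as discrimination and proportionality. The sketch below is a deliberately simplified illustration of that idea in Python, not Arkin's actual code; the class, field names and numeric threshold are hypothetical stand-ins, and the proportionality "ratio" papers over what is, legally, a human value judgment.

```python
# Hypothetical sketch of an "ethical governor"-style constraint layer.
# All names, fields and thresholds are illustrative inventions, not
# Arkin's design: the point is only that the governor sits between
# target selection and weapon release, and can veto an action.

from dataclasses import dataclass

@dataclass
class ProposedStrike:
    target_is_military_objective: bool  # discrimination: positively identified military target?
    expected_civilian_harm: float       # estimated incidental harm (arbitrary units)
    military_advantage: float           # estimated value of the objective (same units)

def governor_permits(strike: ProposedStrike,
                     proportionality_ratio: float = 1.0) -> bool:
    """Return True only if the action passes both IHL-inspired checks."""
    if not strike.target_is_military_objective:
        return False  # fails discrimination: veto unconditionally
    if strike.expected_civilian_harm > proportionality_ratio * strike.military_advantage:
        return False  # fails proportionality: expected harm is excessive
    return True       # constraints satisfied; release authority may proceed

# A verified target with low expected collateral harm is permitted,
# while an unverified target is always vetoed, whatever its value.
print(governor_permits(ProposedStrike(True, expected_civilian_harm=0.2, military_advantage=1.0)))   # True
print(governor_permits(ProposedStrike(False, expected_civilian_harm=0.0, military_advantage=5.0)))  # False
```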
As expected, some have voiced skepticism about the Ethical Governor, and Arkin himself supports "taking a pause" on building lethal autonomous weapons. But he doesn't agree with a wholesale ban on research "until someone can show some kind of fundamental limitation, which I don't believe exists, that the goals that researchers such as myself have established are unobtainable."
Of robots and men
Citing the grisly history of war crimes, advocates of automated killing machines argue that, in the future, these cool and calculating systems might actually be more humane than human soldiers. A robot, for example, will not gun down a civilian out of stress, anger or racial hatred, nor will it succumb to bloodlust or revenge and go on a killing spree in some village.