There has been much discussion of late of the ethics of artificial intelligence (AI), especially regarding robot weapons development and a related but more general discussion about AI as an existential threat to humanity.
If Skynet of the Terminator movies is going to exterminate us, then it seems pretty tame — if not pointless — to start discussing regulation and liability. But, as legal philosopher John Danaher has pointed out, promptly and thoughtfully addressing these areas could help to reduce existential risk over the longer term.
In relation to AI, regulation and liability are two sides of the same safety/public welfare coin. Regulation is about ensuring that AI systems are as safe as possible; liability is about establishing who we can blame — or, more accurately, get legal redress from — when something goes wrong.
The finger of blame
Taking liability first, let’s consider tort (civil wrong) liability. Imagine the following near-future scenario. A driverless tractor is instructed to drill seed in Farmer A’s field but actually does so in Farmer B’s field.
Let’s assume that Farmer A gave proper instructions. Let’s also assume that there was nothing extra that Farmer A should have done, such as placing radio beacons at field boundaries. Now suppose Farmer B wants to sue for negligence (for ease and speed, we’ll ignore nuisance and trespass).
Is Farmer A liable? Probably not. Is the tractor manufacturer liable? Possibly, but there would be complex arguments around duty and standard of care, such as what are the relevant industry standards, and are the manufacturer’s specifications appropriate in light of those standards? There would also be issues over whether the unwanted planting represented damage to property or pure economic loss.
So far, we have implicitly assumed the tractor manufacturer developed the system software. But what if a third party developed the AI system? What if there was code from more than one developer?
Over time, the further that AI systems move away from classical algorithms and coding, the more they will display behaviours that were not just unforeseen by their creators but were wholly unforeseeable. This is significant because foreseeability is a key ingredient for liability in negligence.
To understand the foreseeability issue better, consider a scenario perhaps only a decade or two after the planting incident above: an advanced, fully autonomous AI-driven robot accidentally injures or kills a human, and the law has not substantially changed in the meantime. In that case, the lack of foreseeability could result in nobody at all being liable in negligence.
Blame the AI robot
But would blaming the AI robot itself actually make a difference here?
Leaving aside whether AI systems can be sued at all, AI manufacturers and developers will probably have to be put back into the frame. This might involve replacing negligence with strict liability — liability applied without any need to prove fault or negligence.
Strict liability already exists for defective product claims in many places. Alternatively, there could be a no-fault liability scheme with a claims pool contributed to by the AI industry.