Autonomous systems, like driverless cars, perform tasks that previously could only be performed by humans. In a new IEEE Intelligent Systems Expert Opinion piece, Carnegie Mellon University artificial intelligence ethics experts David Danks and Alex John London argue that current safety regulations were not designed with these systems in mind and are therefore ill-equipped to ensure that autonomous systems will perform safely and reliably.
“Currently, we ensure safety on the roads by regulating the performance of the various mechanical systems of vehicles and by licensing drivers,” said London, professor of philosophy and director of the Center for Ethics and Policy in the Dietrich College of Humanities and Social Sciences. “When cars drive themselves we have no comparable system for evaluating the safety and reliability of their autonomous driving systems.”
Danks and London point to the Department of Transportation’s recent attempt to develop safety regulations for driverless cars as an example of traditional guidelines that do not adequately test and monitor the novel capabilities of autonomous systems. Instead, they suggest creating a staged, dynamic system that resembles the regulatory and approval process for drugs and medical devices, including a robust system for post-approval monitoring.
“Self-driving cars and autonomous systems are rapidly spreading through every part of society, but their successful use depends on whether we can trust and understand them,” said Danks, the L.L. Thurstone Professor of Philosophy and Psychology and head of the Department of Philosophy. “We, as a society, need to find new ways to monitor and guide the development and implementation of these autonomous systems.”
The phased process Danks and London propose would begin with “pre-clinical trials,” or testing in simulated environments, such as self-driving cars navigating varied landscapes and climates. This would provide information about how an autonomous system makes decisions in a wide range of contexts, so that regulators can anticipate how it might act in novel situations.
Acceptable performance would permit the system to move on to “in-human” studies through a limited introduction into real-world environments with trained human “co-pilots.” Successful trials in these targeted environments would then lead to monitored, permit-based testing, and further easing of restrictions as performance goals were met.
Danks and London propose that this regulatory system should be modeled and managed similarly to how the Food and Drug Administration regulates the drug approval process.
“Autonomous vehicles have the potential to save lives and increase economic productivity. But these benefits won’t be realized unless the public has credible assurance that such systems are safe and reliable,” London said.