Autonomous systems, such as driverless cars, perform tasks that previously could only be performed by humans. In a new IEEE Intelligent Systems Expert Opinion piece, Carnegie Mellon University artificial intelligence ethics experts David Danks and Alex John London argue that current safety regulations were not designed with these systems in mind and are therefore ill-equipped to ensure that autonomous systems will perform safely and reliably.
“Currently, we ensure safety on the roads by regulating the performance of the various mechanical systems of vehicles and by licensing drivers,” said London, professor of philosophy and director of the Center for Ethics and Policy in the Dietrich College of Humanities and Social Sciences. “When cars drive themselves we have no comparable system for evaluating the safety and reliability of their autonomous driving systems.”
Danks and London point to the Department of Transportation’s recent attempt to develop safety regulations for driverless cars as an example of traditional guidelines that do not adequately test and monitor the novel capabilities of autonomous systems. Instead, they suggest creating a staged, dynamic system that resembles the regulatory and approval process for drugs and medical devices, including a robust system for post-approval monitoring.
“Self-driving cars and autonomous systems are rapidly spreading through every part of society, but their successful use depends on whether we can trust and understand them,” said Danks, the L.L. Thurstone Professor of Philosophy and Psychology and head of the Department of Philosophy. “We, as a society, need to find new ways to monitor and guide the development and implementation of these autonomous systems.”
The phased process Danks and London propose would begin with “pre-clinical trials,” or testing in simulated environments, such as self-driving cars navigating varied landscapes and climates. This would provide information about how the autonomous system makes decisions across a wide range of contexts, so that regulators can anticipate how it might act in novel future situations.
Acceptable performance would permit the system to move on to “in-human” studies through a limited introduction into real-world environments with trained human “co-pilots.” Successful trials in these targeted environments would then lead to monitored, permit-based testing, and further easing of restrictions as performance goals were met.
Danks and London propose that this regulatory system should be modeled and managed similarly to how the Food and Drug Administration regulates the drug approval process.
“Autonomous vehicles have the potential to save lives and increase economic productivity. But these benefits won’t be realized unless the public has credible assurance that such systems are safe and reliable,” London said.