Autonomous systems — like driverless cars — perform tasks that previously could only be performed by humans. In a new IEEE Intelligent Systems Expert Opinion piece, Carnegie Mellon University artificial intelligence ethics experts David Danks and Alex John London argue that current safety regulations were not designed with these systems in mind and are therefore ill-equipped to ensure that autonomous systems will perform safely and reliably.
“Currently, we ensure safety on the roads by regulating the performance of the various mechanical systems of vehicles and by licensing drivers,” said London, professor of philosophy and director of the Center for Ethics and Policy in the Dietrich College of Humanities and Social Sciences. “When cars drive themselves we have no comparable system for evaluating the safety and reliability of their autonomous driving systems.”
Danks and London point to the Department of Transportation’s recent attempt to develop safety regulations for driverless cars as an example of traditional guidelines that do not adequately test and monitor the novel capabilities of autonomous systems. Instead, they suggest creating a staged, dynamic system that resembles the regulatory and approval process for drugs and medical devices, including a robust system for post-approval monitoring.
“Self-driving cars and autonomous systems are rapidly spreading through every part of society, but their successful use depends on whether we can trust and understand them,” said Danks, the L.L. Thurstone Professor of Philosophy and Psychology and head of the Department of Philosophy. “We, as a society, need to find new ways to monitor and guide the development and implementation of these autonomous systems.”
The phased process Danks and London propose would begin with “pre-clinical trials,” or testing in simulated environments, such as self-driving cars navigating varied landscapes and climates. This would provide information about how an autonomous system makes decisions in a wide range of contexts, so that we can understand how it might act in novel future situations.
Acceptable performance would permit the system to move on to “in-human” studies through a limited introduction into real-world environments with trained human “co-pilots.” Successful trials in these targeted environments would then lead to monitored, permit-based testing, and further easing of restrictions as performance goals were met.
Danks and London propose that this regulatory system should be modeled and managed similarly to how the Food and Drug Administration regulates the drug approval process.
“Autonomous vehicles have the potential to save lives and increase economic productivity. But these benefits won’t be realized unless the public has credible assurance that such systems are safe and reliable,” London said.