A massive new survey developed by MIT researchers reveals some distinct global preferences concerning the ethics of autonomous vehicles, as well as some regional variations in those preferences.
The survey has global reach and a unique scale, with over 2 million online participants from over 200 countries weighing in on versions of a classic ethical conundrum, the “Trolley Problem.” The problem involves scenarios in which an accident involving a vehicle is imminent, and the vehicle must opt for one of two potentially fatal options. In the case of driverless cars, that might mean swerving toward a couple of people, rather than a large group of bystanders.
“The study is basically trying to understand the kinds of moral decisions that driverless cars might have to resort to,” says Edmond Awad, a postdoc at the MIT Media Lab and lead author of a new paper outlining the results of the project. “We don’t know yet how they should do that.”
Still, Awad adds, “We found that there are three elements that people seem to approve of the most.”
Indeed, the most emphatic global preferences in the survey are for sparing the lives of humans over the lives of other animals; sparing the lives of many people rather than a few; and preserving the lives of the young, rather than older people.
“The main preferences were to some degree universally agreed upon,” Awad notes. “But the degree to which they agree with this or not varies among different groups or countries.” For instance, the researchers found a less pronounced tendency to favor younger people, rather than the elderly, in what they defined as an “eastern” cluster of countries, including many in Asia.
The paper, “The Moral Machine Experiment,” is being published today in Nature.
The authors are Awad; Sohan Dsouza, a doctoral student in the Media Lab; Richard Kim, a research assistant in the Media Lab; Jonathan Schulz, a postdoc at Harvard University; Joseph Henrich, a professor at Harvard; Azim Shariff, an associate professor at the University of British Columbia; Jean-François Bonnefon, a professor at the Toulouse School of Economics; and Iyad Rahwan, an associate professor of media arts and sciences at the Media Lab, and a faculty affiliate in the MIT Institute for Data, Systems, and Society.
Awad is a postdoc in the MIT Media Lab’s Scalable Cooperation group, which is led by Rahwan.
To conduct the survey, the researchers designed what they call “Moral Machine,” a multilingual online game in which participants could state their preferences concerning a series of dilemmas that autonomous vehicles might face. For instance: If it comes right down to it, should autonomous vehicles spare the lives of law-abiding bystanders, or, alternately, law-breaking pedestrians who might be jaywalking? (Most people in the survey opted for the former.)
All told, “Moral Machine” compiled nearly 40 million individual decisions from respondents in 233 countries; the survey collected 100 or more responses from 130 countries. The researchers analyzed the data as a whole, while also breaking participants into subgroups defined by age, education, gender, income, and political and religious views. There were 491,921 respondents who offered demographic data.
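The basic aggregation step the researchers faced can be sketched in a few lines. The snippet below is a simplified illustration, not the study's actual statistical method (the paper uses a more sophisticated conjoint analysis); the record format, attribute names, and values are hypothetical.

```python
from collections import defaultdict

# Hypothetical response records: each pairs a dilemma attribute with the
# respondent's choice and country. The real Moral Machine dataset is far
# richer; this only illustrates the aggregation step.
responses = [
    {"attribute": "humans_vs_pets", "spared_first": True,  "country": "US"},
    {"attribute": "humans_vs_pets", "spared_first": True,  "country": "JP"},
    {"attribute": "humans_vs_pets", "spared_first": False, "country": "JP"},
    {"attribute": "young_vs_old",   "spared_first": True,  "country": "US"},
]

def preference_share(records, attribute):
    """Fraction of responses that spared the first-listed group."""
    relevant = [r for r in records if r["attribute"] == attribute]
    if not relevant:
        return None
    return sum(r["spared_first"] for r in relevant) / len(relevant)

def share_by_country(records, attribute):
    """The same share, broken out per country to expose regional variation."""
    by_country = defaultdict(list)
    for r in records:
        if r["attribute"] == attribute:
            by_country[r["country"]].append(r["spared_first"])
    return {c: sum(v) / len(v) for c, v in by_country.items()}

print(preference_share(responses, "humans_vs_pets"))   # 2 of 3 spare humans
print(share_by_country(responses, "humans_vs_pets"))
```

Breaking the same share out by country (or by age, gender, income, and so on) is what lets the analysis separate globally shared preferences from regional ones.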
The scholars did not find marked differences in moral preferences based on these demographic characteristics, but they did find larger “clusters” of moral preferences based on cultural and geographic affiliations. They defined “western,” “eastern,” and “southern” clusters of countries, and found some more pronounced variations along these lines. For instance: Respondents in southern countries had a relatively stronger tendency to favor sparing young people rather than the elderly, especially compared to the eastern cluster.
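Grouping countries into clusters of this kind amounts to treating each country as a vector of preference strengths and grouping nearby vectors. The sketch below uses made-up numbers and a deliberately simple greedy grouping rule; the paper's actual clustering procedure is more rigorous.

```python
import math

# Hypothetical country-level preference vectors:
# (favor sparing the young, favor sparing the many, favor sparing humans).
# Values are illustrative only, not the study's estimates.
profiles = {
    "A": (0.80, 0.75, 0.90),
    "B": (0.78, 0.70, 0.88),
    "C": (0.40, 0.72, 0.85),
    "D": (0.42, 0.74, 0.87),
}

def cluster(profiles, threshold=0.2):
    """Greedy single-linkage grouping: a country joins the first existing
    cluster containing a country within `threshold` Euclidean distance."""
    clusters = []
    for name, vec in profiles.items():
        for group in clusters:
            if any(math.dist(vec, profiles[m]) < threshold for m in group):
                group.append(name)
                break
        else:
            clusters.append([name])
    return clusters

print(cluster(profiles))  # [['A', 'B'], ['C', 'D']]
```

Here countries A and B (strong preference for sparing the young) land in one cluster and C and D (weaker preference) in another, mirroring the kind of east/west/south split the researchers report.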
Awad suggests that acknowledging these types of preferences should be a basic part of informing public-sphere discussion of these issues. Since all regions, for example, show a moderate preference for sparing law-abiding bystanders over jaywalkers, knowing these preferences could, in theory, inform the way software is written to control autonomous vehicles.
“The question is whether these differences in preferences will matter in terms of people’s adoption of the new technology when [vehicles] employ a specific rule,” he says.
Rahwan, for his part, notes that “public interest in the platform surpassed our wildest expectations,” allowing the researchers to conduct a survey that raised awareness about automation and ethics while also yielding specific public-opinion information.
“On the one hand, we wanted to provide a simple way for the public to engage in an important societal discussion,” Rahwan says. “On the other hand, we wanted to collect data to identify which factors people think are important for autonomous cars to use in resolving ethical tradeoffs.”
Beyond the results of the survey, Awad suggests, seeking public input about an issue of innovation and public safety should continue to become a larger part of the dialogue surrounding autonomous vehicles.
“What we have tried to do in this project, and what I would hope becomes more common, is to create public engagement in these sorts of decisions,” Awad says.