Imagine you are in charge of the switch on a trolley track.
The express is due any minute, but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that’s why you have this switch. But on the alternate track there’s more trouble: Your child, who has come to work with you, has fallen down on the rails and can’t get up. That switch can save your child or a busload of others, but not both. What do you do?
This ethical puzzler is commonly known as the Trolley Problem. It’s a standard topic in philosophy and ethics classes, because your answer says a lot about how you view the world. But in a very 21st-century take, several writers (here and here, for example) have adapted the scenario to a modern obsession: autonomous vehicles. Google’s self-driving cars have already driven 1.7 million miles on American roads and, the company says, have never been the cause of an accident in that time. Volvo says it will have a self-driving model on Swedish highways by 2017. Elon Musk says the technology is so close that he can have current-model Teslas ready to take the wheel on “major roads” by this summer.
Who watches the watchers?
The technology may have arrived, but are we ready?
Google’s cars can already handle real-world hazards, such as cars suddenly swerving in front of them. But in some situations, a crash is unavoidable. (In fact, Google’s cars have been in dozens of minor accidents, all of which the company blames on human drivers.) How will a Google car, or an ultra-safe Volvo, be programmed to handle a no-win situation — a blown tire, perhaps — where it must choose between swerving into oncoming traffic or steering directly into a retaining wall? The computers will certainly be fast enough to make a reasoned judgment within milliseconds. They would have time, for example, to scan the cars ahead and identify the one most likely to survive a collision, or the one with the most people inside. But should they be programmed to make the decision that is best for their owners? Or the choice that does the least harm — even if that means choosing to slam into a retaining wall to avoid hitting an oncoming school bus? Who will make that call, and how will they decide?
“Ultimately, this problem devolves into a choice between utilitarianism and deontology,” said UAB alumnus Ameen Barghi. Barghi, who graduated in May and is headed to Oxford University this fall as UAB’s third Rhodes Scholar, is no stranger to moral dilemmas. He was a senior leader on UAB’s Bioethics Bowl team, which won the 2015 national championship. Their winning debates included such topics as the use of clinical trials for the Ebola virus and the ethics of a hypothetical drug that could make people fall in love with each other. In last year’s Ethics Bowl competition, the team argued another provocative question related to autonomous vehicles: If they turn out to be far safer than regular cars, would the government be justified in banning human driving completely? (Their answer, in a nutshell: yes.)
Death in the driver’s seat
So should your self-driving car be programmed to kill you in order to save others?