Imagine you are in charge of the switch on a trolley track.
The express is due any minute, but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that's why you have this switch. But on the alternate track there's more trouble: your child, who has come to work with you, has fallen onto the rails and can't get up. The switch can save your child or a busload of others, but not both. What do you do?
This ethical puzzler is commonly known as the Trolley Problem. It's a standard topic in philosophy and ethics classes because your answer says a lot about how you view the world. But in a very 21st-century twist, several writers have adapted the scenario to a modern obsession: autonomous vehicles. Google's self-driving cars have already driven 1.7 million miles on American roads and, the company says, have never been the cause of an accident in that time. Volvo says it will have a self-driving model on Swedish highways by 2017. Elon Musk says the technology is so close that current-model Teslas could be ready to take the wheel on "major roads" by this summer.
Who watches the watchers?
The technology may have arrived, but are we ready?
Google's cars can already handle real-world hazards, such as a car suddenly swerving in front of them. But in some situations, a crash is unavoidable. (In fact, Google's cars have been in dozens of minor accidents, all of which the company blames on human drivers.) How will a Google car, or an ultra-safe Volvo, be programmed to handle a no-win situation — a blown tire, perhaps — where it must choose between swerving into oncoming traffic or steering directly into a retaining wall? The computers will certainly be fast enough to make a reasoned judgment within milliseconds. They would have time to scan the cars ahead and identify the one most likely to survive a collision, for example, or the one with the most humans inside. But should they be programmed to make the decision that is best for their owners? Or the choice that does the least harm — even if that means choosing to slam into a retaining wall to avoid hitting an oncoming school bus? Who will make that call, and how will they decide?
“Ultimately, this problem devolves into a choice between utilitarianism and deontology,” said UAB alumnus Ameen Barghi. Barghi, who graduated in May and is headed to Oxford University this fall as UAB’s third Rhodes Scholar, is no stranger to moral dilemmas. He was a senior leader on UAB’s Bioethics Bowl team, which won the 2015 national championship. Their winning debates included such topics as the use of clinical trials for Ebola virus, and the ethics of a hypothetical drug that could make people fall in love with each other. In last year’s Ethics Bowl competition, the team argued another provocative question related to autonomous vehicles: If they turn out to be far safer than regular cars, would the government be justified in banning human driving completely? (Their answer, in a nutshell: yes.)
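To make the contrast concrete, here is a deliberately simplified sketch of how the two ethical frameworks could translate into code. This is purely illustrative: no automaker has published its crash-decision logic, and the `Obstacle` class, the harm scores, and both policy functions are invented assumptions, not any real system's design.

```python
# Hypothetical illustration of the utilitarian vs. owner-protective dilemma.
# All classes, numbers, and policies here are invented for the sake of example.
from dataclasses import dataclass


@dataclass
class Obstacle:
    name: str
    occupants: int        # people at risk if the car steers this way
    expected_harm: float  # assumed probability of serious harm per person


def utilitarian_choice(options: list[Obstacle]) -> Obstacle:
    """Utilitarian policy: minimize total expected harm to everyone."""
    return min(options, key=lambda o: o.occupants * o.expected_harm)


def owner_first_choice(options: list[Obstacle],
                       harm_to_owner: dict[str, float]) -> Obstacle:
    """Owner-protective policy: minimize expected harm to the car's own passenger."""
    return min(options, key=lambda o: harm_to_owner[o.name])


# The blown-tire scenario from the article, with made-up numbers:
options = [
    Obstacle("school bus", occupants=30, expected_harm=0.4),
    Obstacle("retaining wall", occupants=1, expected_harm=0.9),  # only the owner
]
harm_to_owner = {"school bus": 0.2, "retaining wall": 0.9}

print(utilitarian_choice(options).name)                 # retaining wall (0.9 < 12.0)
print(owner_first_choice(options, harm_to_owner).name)  # school bus (0.2 < 0.9)
```

The same inputs produce opposite decisions: the utilitarian car sacrifices its passenger to spare the bus, while the owner-protective car does the reverse. A deontological rule (say, "never deliberately steer toward people") would be a third, constraint-based policy rather than a score minimization, which is exactly the kind of design choice the article asks who gets to make.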
Death in the driver’s seat
So should your self-driving car be programmed to kill you in order to save others?