Imagine you are in charge of the switch on a trolley track.
The express is due any minute, but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that’s why you have this switch. But on the alternate track there’s more trouble: Your child, who has come to work with you, has fallen onto the rails and can’t get up. That switch can save your child or a busful of others, but not both. What do you do?
This ethical puzzler is commonly known as the Trolley Problem. It’s a standard topic in philosophy and ethics classes, because your answer says a lot about how you view the world. But in a very 21st-century twist, several writers (here and here, for example) have adapted the scenario to a modern obsession: autonomous vehicles. Google’s self-driving cars have already driven 1.7 million miles on American roads and, the company says, have never been the cause of an accident in that time. Volvo says it will have a self-driving model on Swedish highways by 2017. Elon Musk says the technology is so close that current-model Teslas could be ready to take the wheel on “major roads” by this summer.
Who watches the watchers?
The technology may have arrived, but are we ready?
Google’s cars can already handle real-world hazards, such as cars suddenly swerving in front of them. But in some situations, a crash is unavoidable. (In fact, Google’s cars have been in dozens of minor accidents, all of which the company blames on human drivers.) How will a Google car, or an ultra-safe Volvo, be programmed to handle a no-win situation — a blown tire, perhaps — where it must choose between swerving into oncoming traffic or steering directly into a retaining wall? The computers will certainly be fast enough to make a reasoned judgment within milliseconds. They would have time to scan the cars ahead and identify the one most likely to survive a collision, for example, or the one carrying the most people. But should they be programmed to make the decision that is best for their owners? Or the choice that does the least harm overall — even if that means choosing to slam into a retaining wall to avoid hitting an oncoming school bus? Who will make that call, and how will they decide?
“Ultimately, this problem devolves into a choice between utilitarianism and deontology,” said UAB alumnus Ameen Barghi. Barghi, who graduated in May and is headed to Oxford University this fall as UAB’s third Rhodes Scholar, is no stranger to moral dilemmas. He was a senior leader on UAB’s Bioethics Bowl team, which won the 2015 national championship. Their winning debates included such topics as the use of clinical trials for Ebola virus, and the ethics of a hypothetical drug that could make people fall in love with each other. In last year’s Ethics Bowl competition, the team argued another provocative question related to autonomous vehicles: If they turn out to be far safer than regular cars, would the government be justified in banning human driving completely? (Their answer, in a nutshell: yes.)
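The split Barghi describes can be made concrete with a toy decision rule. The sketch below is purely illustrative: the `Outcome` fields and the harm scores are invented for this example, and bear no relation to anything Google, Volvo, or Tesla has actually published. It simply shows how a utilitarian policy (minimize total harm) and an owner-first policy (protect the occupants) can pick opposite maneuvers from the same predictions:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and its predicted harm (hypothetical model)."""
    maneuver: str
    occupant_harm: int   # predicted injuries inside the vehicle
    external_harm: int   # predicted injuries to everyone else

def utilitarian_choice(outcomes):
    """Minimize total predicted harm, no matter who bears it."""
    return min(outcomes, key=lambda o: o.occupant_harm + o.external_harm)

def owner_first_choice(outcomes):
    """Protect the occupants first; break ties by external harm."""
    return min(outcomes, key=lambda o: (o.occupant_harm, o.external_harm))

# A no-win scenario like the blown-tire example above (scores are made up):
outcomes = [
    Outcome("swerve into retaining wall", occupant_harm=2, external_harm=0),
    Outcome("continue toward oncoming bus", occupant_harm=1, external_harm=10),
]

print(utilitarian_choice(outcomes).maneuver)   # swerve into retaining wall
print(owner_first_choice(outcomes).maneuver)   # continue toward oncoming bus
```

The point is not the arithmetic but the divergence: with identical inputs, the two policies steer the car in opposite directions, which is exactly why someone has to decide which rule gets written into the software.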
Death in the driver’s seat
So should your self-driving car be programmed to kill you in order to save others?