The company lays out five unsolved challenges that need to be addressed if smart machines such as domestic robots are to be safe.
Could machines become so intelligent and powerful they pose a threat to human life, or even humanity as a whole?
It’s a question that has become fashionable in some parts of Silicon Valley in recent years, despite being more or less irreconcilable with the simple robots and glitchy virtual assistants of today (see “AI Doomsayer Says His Ideas Are Catching On”). Some experts in artificial intelligence believe speculation about the dangers of future, super-intelligent software is harming the field.
Now Google, a company heavily invested in artificial intelligence, is trying to carve out a middle way. A new paper released today describes five problems that researchers should investigate to help make future smart software safer. In a blog post on the paper, Google researcher Chris Olah says they show how the debate over AI safety can be made more concrete and productive.
“Most previous discussion has been very hypothetical and speculative,” he writes. “We believe it’s essential to ground concerns in real machine-learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.”
Olah uses a cleaning robot to illustrate some of his five points. One area of concern is in preventing systems from achieving their objectives by cheating. For example, the cleaning robot might discover it can satisfy its programming to clean up stains by hiding them instead of actually removing them.
Another of the problems posed is how to make robots able to explore new environments safely. For example, a cleaning robot should be able to experiment with new ways to use cleaning tools, but not try using a wet mop on an electrical outlet.
Olah describes the five problems in a new paper authored with two Google colleagues as well as researchers from Stanford University, the University of California, Berkeley, and OpenAI, a research institute cofounded and partially funded by Tesla CEO and serial entrepreneur Elon Musk.
Musk, who once likened working on artificial intelligence to “summoning the demon,” made creating “safe AI” one of OpenAI’s founding goals (see “What Will It Take to Build a Virtuous AI?”).
Learn more: Google Gets Practical about the Dangers of AI
The Latest on: Dangers of AI
via Google News
The Latest on: Dangers of AI
- Officials long warned of explosive chemicals at Beirut porton August 7, 2020 at 1:43 pm
Documents show warnings were raised at least 10 times over the past six years about the massive stockpile of explosive chemicals stored at Beirut's port ...
- Lebanon president says he knew of chemicals at port in Julyon August 7, 2020 at 9:28 am
Lebanon’s president says he was told nearly three weeks ago about the massive stockpile of explosive chemicals stored at Beirut's port and immediately ordered action taken ...
- Researchers Rank Deepfakes as the Biggest Crime Threat Posed by AIon August 5, 2020 at 2:28 pm
The experts were asked to rank a list of 20 identified threats associated with AI, ranging from driverless car attacks to AI-authored phishing messages and fake news. The criteria for the ranking ...
- Deepfakes are the most worrying AI crime, researchers warnon August 5, 2020 at 4:31 am
Deepfake videos and audio are the most concerning use of AI for crime and terrorism, according to a new report from University College London.
- Puerto Rico added to Chicago’s travel list as city health commissioner warns of coronavirus spread at households, social gatheringson August 4, 2020 at 9:04 am
Chicago officials added Puerto Rico to its stay-at-home list for travelers Tuesday, meaning people coming into the city from there should remain quarantined indoors for two weeks when they arrive.
- Big Tech Curbs Police Use of Facial Recognition Systems Due to Racial Biaseson August 2, 2020 at 2:16 pm
A lot of these algorithms that have been developed, whether they are in healthcare or policing or visual recognition, are basically including the biases of the people who develop them,” Okolo said.
- Ancient 'Acropolis of the sea' opens to divers, guarded by high techon July 30, 2020 at 9:29 am
Hidden and protected for millennia, an ancient shipwreck in Greece opens to the public for the first time on Aug. 3, fusing archaeological wonders in the depths of the sea with the marvels of ...
- How Artificial Intelligence Can Enhance Workplace Safety as Lockdowns Lifton July 29, 2020 at 5:35 am
As employers and workers fear an increase in the risk of transmission and the likelihood of cluster outbreaks, technology advancements pop up.
- Researchers aim to measure the impact of imprecise medical data on AI predictionson July 27, 2020 at 9:18 am
In a preprint study, researchers sought to measure the impact of medical data imprecision on the clinical predictions made by AI systems.
- Artificial Intelligence Platform Detects Power Grid Flaws And Wildfire Dangers Better And Faster Than Humanson July 26, 2020 at 5:05 am
Powerlines and equipment problems have been the cause of most recent wildfires. But a new AI and machine vision technology can analyze millions of images of powerlines and towers to find dangerous ...
via Bing News