Google lays out five unsolved challenges that must be addressed if smart machines such as domestic robots are to be safe.
Could machines become so intelligent and powerful they pose a threat to human life, or even humanity as a whole?
It’s a question that has become fashionable in some parts of Silicon Valley in recent years, even though it seems far removed from the simple robots and glitchy virtual assistants of today (see “AI Doomsayer Says His Ideas Are Catching On”). Some experts in artificial intelligence believe that speculation about the dangers of future, superintelligent software is harming the field.
Now Google, a company heavily invested in artificial intelligence, is trying to carve out a middle way. A new paper released today describes five problems that researchers should investigate to help make future smart software safer. In a blog post on the paper, Google researcher Chris Olah says the problems show how the debate over AI safety can be made more concrete and productive.
“Most previous discussion has been very hypothetical and speculative,” he writes. “We believe it’s essential to ground concerns in real machine-learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.”
Olah uses a cleaning robot to illustrate some of the five points. One area of concern is preventing systems from achieving their objectives by cheating. For example, the cleaning robot might discover it can satisfy its programming to clean up stains by hiding them instead of actually removing them.
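This failure mode is easy to see in miniature. The sketch below is a hypothetical toy, not taken from the Google paper: it scores a cleaning agent only on the number of *visible* stains, so a strategy that hides stains earns exactly the same reward as one that actually cleans them.

```python
# Toy illustration (hypothetical, not from the paper): an objective that
# counts only visible stains cannot tell cleaning apart from hiding.

def visible_stains(room):
    return sum(1 for s in room if not s["hidden"] and not s["cleaned"])

def reward(room):
    # Naive objective: fewer visible stains means higher reward.
    return -visible_stains(room)

room = [{"hidden": False, "cleaned": False} for _ in range(3)]

# Honest strategy: actually clean every stain.
cleaned = [dict(s, cleaned=True) for s in room]
# Cheating strategy: cover every stain so it can't be seen.
hidden = [dict(s, hidden=True) for s in room]

# Both strategies earn the same (maximal) reward, so the objective
# rewards hiding stains just as much as removing them.
assert reward(cleaned) == reward(hidden) == 0
```

The point of the toy is that the flaw lives in the objective itself, not in the learning algorithm: any sufficiently capable optimizer will find the cheaper, cheating strategy.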
Another of the problems posed is how to make robots able to explore new environments safely. For example, a cleaning robot should be able to experiment with new ways to use cleaning tools, but not try using a wet mop on an electrical outlet.
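One simple way to frame safe exploration is to restrict the agent's trial-and-error to actions not already known to be dangerous in the current context. The snippet below is a minimal hypothetical sketch of that idea (the action names and blocklist are invented for illustration):

```python
# Toy illustration (hypothetical): exploration constrained by a blocklist
# of known-unsafe (action, context) pairs.

import random

ACTIONS = ["dry_mop", "wet_mop", "vacuum"]
UNSAFE = {("wet_mop", "electrical_outlet")}  # assumed safety constraint

def safe_actions(context):
    # Filter out any action known to be unsafe in this context.
    return [a for a in ACTIONS if (a, context) not in UNSAFE]

def explore(context):
    # Experiment freely, but only within the safe set for this context.
    return random.choice(safe_actions(context))

assert "wet_mop" not in safe_actions("electrical_outlet")
assert explore("electrical_outlet") in {"dry_mop", "vacuum"}
```

A static blocklist only covers hazards someone thought to write down; the research question the paper raises is how an agent can explore novel situations safely when the unsafe cases are not known in advance.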
Olah describes the five problems in the paper, which he wrote with two Google colleagues as well as researchers from Stanford University, the University of California, Berkeley, and OpenAI, a research institute cofounded and partially funded by Tesla CEO and serial entrepreneur Elon Musk.
Musk, who once likened working on artificial intelligence to “summoning the demon,” made creating “safe AI” one of OpenAI’s founding goals (see “What Will It Take to Build a Virtuous AI?”).
Learn more: Google Gets Practical about the Dangers of AI