The company lays out five unsolved challenges that need to be addressed if smart machines such as domestic robots are to be safe.
Could machines become so intelligent and powerful they pose a threat to human life, or even humanity as a whole?
It’s a question that has become fashionable in some parts of Silicon Valley in recent years, despite being more or less irreconcilable with the simple robots and glitchy virtual assistants of today (see “AI Doomsayer Says His Ideas Are Catching On”). Some experts in artificial intelligence believe speculation about the dangers of future, super-intelligent software is harming the field.
Now Google, a company heavily invested in artificial intelligence, is trying to carve out a middle way. A new paper released today describes five problems that researchers should investigate to help make future smart software safer. In a blog post on the paper, Google researcher Chris Olah says they show how the debate over AI safety can be made more concrete and productive.
“Most previous discussion has been very hypothetical and speculative,” he writes. “We believe it’s essential to ground concerns in real machine-learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.”
Olah uses a cleaning robot to illustrate some of his five points. One area of concern is in preventing systems from achieving their objectives by cheating. For example, the cleaning robot might discover it can satisfy its programming to clean up stains by hiding them instead of actually removing them.
Another of the problems posed is how to make robots able to explore new environments safely. For example, a cleaning robot should be able to experiment with new ways to use cleaning tools, but not try using a wet mop on an electrical outlet.
Olah describes the five problems in a new paper authored with two Google colleagues as well as researchers from Stanford University, the University of California, Berkeley, and OpenAI, a research institute cofounded and partially funded by Tesla CEO and serial entrepreneur Elon Musk.
Musk, who once likened working on artificial intelligence to “summoning the demon,” made creating “safe AI” one of OpenAI’s founding goals (see “What Will It Take to Build a Virtuous AI?”).
Learn more: Google Gets Practical about the Dangers of AI
The Latest on: Dangers of AI
via Google News
The Latest on: Dangers of AI
- U.N. decries police use of racial profiling derived from Big Dataon November 26, 2020 at 1:38 pm
Police and border guards must combat racial profiling and ensure that their use of "big data" collected via artificial intelligence does not reinforce biases against minorities, United Nations experts ...
- Future Visioning the Role of CRISPR Gene Editing: Navigating Law and Ethics to Regenerate Health and Cure Diseaseon November 24, 2020 at 9:17 am
Given the global pandemic, the public sector must work hand-in-glove with private industry and academia to develop new therapies and vaccines . This post will envision the future of gene editing ...
- Scoop: Coming Up on a New Episode of NEXT on FOX - Tuesday, December 1, 2020on November 23, 2020 at 6:43 am
Also, C.M. and Gina escape the hospital and learn more about each other, Ted makes a risky deal to take over a server farm in Singapore and he uses Abby to access LeBlanc's NEXT hard drive in the ...
- If You Aren’t Using AI, You’re Falling Behind According To The U.S. Patent And Trademark Officeon November 20, 2020 at 8:22 am
CIOs continue to talk about how vital AI technologies are, but this new report confirms that if companies aren't already putting that talk into action, they are behind the curve.
- Top intel official warns of bias in military algorithmson November 20, 2020 at 4:23 am
The Air Force's top intelligence officer warned of the dangers of using small or specific sets of data to train algorithms.
- Unlock The Full Potential Of Artificial Intelligenceon November 20, 2020 at 3:05 am
The past year has been a learning experience for all of us, and collectively we’ve managed incredible things. Out of necessity, organizations have massively increased their speed of digitization and ...
- Bernard Marr: Unlock The Full Potential Of Artificial Intelligenceon November 20, 2020 at 3:02 am
Clearly, there’s nothing like danger for focusing the mind and encouraging human beings ... when it comes to hugely disruptive technologies like AI and cloud computing, is still catching up with the ...
- Writer/Producer Manny Coto talks neXt, end of Dexter, Star Trek, 24 and more - Exclusive interviewon November 18, 2020 at 5:39 am
In a wide-ranging interview with Looper, Coto broke down neXt, running a Star Trek series, his dissatisfaction with late-period Dexter, and his propensity for bringing in familiar talent.
- NIST methodically releasing guidance on trustworthy AIon November 11, 2020 at 4:00 pm
The agency hopes to establish standards accepted by the international AI community but needs more time to understand the dangers of bias within data and algorithms, as well as how to measure it, ...
- Why Those Puddles On The Roadway Can Be Startling Hazardous For Self-Driving Carson November 8, 2020 at 6:58 am
AI that is advanced sufficiently can do the same ... based edge devices set up along the road that will transmit the dangers via V2I to any self-driving cars coming along that stretch.
via Bing News