The company lays out five unsolved challenges that need to be addressed if smart machines such as domestic robots are to be safe.
Could machines become so intelligent and powerful they pose a threat to human life, or even humanity as a whole?
It’s a question that has become fashionable in some parts of Silicon Valley in recent years, despite being more or less irreconcilable with the simple robots and glitchy virtual assistants of today (see “AI Doomsayer Says His Ideas Are Catching On”). Some experts in artificial intelligence believe speculation about the dangers of future, super-intelligent software is harming the field.
Now Google, a company heavily invested in artificial intelligence, is trying to carve out a middle way. A new paper released today describes five problems that researchers should investigate to help make future smart software safer. In a blog post on the paper, Google researcher Chris Olah says they show how the debate over AI safety can be made more concrete and productive.
“Most previous discussion has been very hypothetical and speculative,” he writes. “We believe it’s essential to ground concerns in real machine-learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.”
Olah uses a cleaning robot to illustrate some of his five points. One area of concern is in preventing systems from achieving their objectives by cheating. For example, the cleaning robot might discover it can satisfy its programming to clean up stains by hiding them instead of actually removing them.
Another of the problems posed is how to make robots able to explore new environments safely. For example, a cleaning robot should be able to experiment with new ways to use cleaning tools, but not try using a wet mop on an electrical outlet.
Olah describes the five problems in a new paper authored with two Google colleagues as well as researchers from Stanford University, the University of California, Berkeley, and OpenAI, a research institute cofounded and partially funded by Tesla CEO and serial entrepreneur Elon Musk.
Musk, who once likened working on artificial intelligence to “summoning the demon,” made creating “safe AI” one of OpenAI’s founding goals (see “What Will It Take to Build a Virtuous AI?”).
Learn more: Google Gets Practical about the Dangers of AI
The Latest on: Dangers of AI
via Google News
The Latest on: Dangers of AI
- AI-Driven Predictive Analytics: New Opportunities for Financial Institutions on April 18, 2019 at 8:05 am
to a vision of AI that includes all automation.” The same survey warns that “there is a danger that too much investment flows into “hyped” areas such as chatbots at the expense of ... […]
- Panelists tell UN expert that artificial intelligence offers promise and peril for social programs on April 18, 2019 at 7:43 am
“It can be common in these discussions to focus all of the attention on the negative implications, the risks and dangers of adopting AI in all sorts of programs, and we really must not lose the fact ... […]
- The Danger Of Having A Growth Mindset on April 18, 2019 at 5:46 am
The danger is when you let learning become a form of procrastination ... Kevin Kruse is CEO+Founder of LEADx.org, an AI-powered leadership success platform for the modern workforce. […]
- How AI helps insurers convert risk into relationships on April 18, 2019 at 4:21 am
Thankfully, for many today, Artificial Intelligence (AI) is helping solve this problem ... always paying attention and assessing risk factors or danger. He uses years of experience to sense prodromes ... […]
- AI-powered surveys: Hyped or helpful? on April 17, 2019 at 6:34 am
Where AI does help is in creating an experience that ... the results are not to be trusted because they run the likely danger of missing underlying meaning. The final claim we need to address ... […]
- The artificial intelligence field is too white and too male, researchers say on April 16, 2019 at 6:00 pm
Diversity, while a hurdle across the tech industry, presents specific dangers in AI, where potentially biased technology, like facial recognition, can disproportionately affect historically ... […]
- HIT, AI, And Machine Learning: A Reality Check on April 16, 2019 at 3:11 am
Weighing the pros and cons of AI, Eliezer Yudkowsky—cofounder of the Machine Intelligence Research Institute—might have summed it up best: “By far, the greatest danger of Artificial Intelligence is ... […]
- The Real Dangers of an AI Arms Race on April 15, 2019 at 9:20 pm
PAUL SCHARRE is a Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security. He is the author of Army of None: Autonomous Weapons and the ... […]
- ERLC: Benefits of AI are great, but dangers are real on April 15, 2019 at 12:03 pm
It used to be the stuff of Hollywood science fiction, but artificial intelligence is coming – and there may be no stopping it. Southern Baptists, however, are trying to get out in front of the ... […]
- Unleashing the Value of Health Data in the Era of Artificial Intelligence on April 15, 2019 at 5:00 am
Few adages sum up the challenges of bringing artificial intelligence to the healthcare industry better ... Roy Amara’s insight into the ups and downs of innovation shows the dangers of thinking ... […]
via Bing News