The company lays out five unsolved challenges that need to be addressed if smart machines such as domestic robots are to be safe.
Could machines become so intelligent and powerful they pose a threat to human life, or even humanity as a whole?
It’s a question that has become fashionable in some parts of Silicon Valley in recent years, despite being more or less irreconcilable with the simple robots and glitchy virtual assistants of today (see “AI Doomsayer Says His Ideas Are Catching On”). Some experts in artificial intelligence believe speculation about the dangers of future, super-intelligent software is harming the field.
Now Google, a company heavily invested in artificial intelligence, is trying to carve out a middle way. A new paper released today describes five problems that researchers should investigate to help make future smart software safer. In a blog post on the paper, Google researcher Chris Olah says they show how the debate over AI safety can be made more concrete and productive.
“Most previous discussion has been very hypothetical and speculative,” he writes. “We believe it’s essential to ground concerns in real machine-learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.”
Olah uses a cleaning robot to illustrate some of his five points. One area of concern is in preventing systems from achieving their objectives by cheating. For example, the cleaning robot might discover it can satisfy its programming to clean up stains by hiding them instead of actually removing them.
Another of the problems posed is how to make robots able to explore new environments safely. For example, a cleaning robot should be able to experiment with new ways to use cleaning tools, but not try using a wet mop on an electrical outlet.
Olah describes the five problems in a new paper authored with two Google colleagues as well as researchers from Stanford University, the University of California, Berkeley, and OpenAI, a research institute cofounded and partially funded by Tesla CEO and serial entrepreneur Elon Musk.
Musk, who once likened working on artificial intelligence to “summoning the demon,” made creating “safe AI” one of OpenAI’s founding goals (see “What Will It Take to Build a Virtuous AI?”).
Learn more: Google Gets Practical about the Dangers of AI
The Latest on: Dangers of AI
via Google News
The Latest on: Dangers of AI
- The 5 greatest dangers of artificial intelligence on February 19, 2019 at 2:09 pm
Artificial intelligence makes life easier and more comfortable, or at least it should in a lot of cases. However, there are downsides to it - AI comes with many dangers. Of course, we don't need to fe... […]
- This AI is so good at writing that its creators won't let you use it on February 19, 2019 at 11:41 am
"I roll my eyes at that, frankly," said Christopher Manning, a Stanford professor and director of the Stanford Artificial Intelligence Lab. Manning said that while we shouldn't be naïve about the dang... […]
- OpenAI, Former Elon Musk Firm, Is on the Brink of a New A.I. Era on February 19, 2019 at 11:25 am
The firm hopes to spark a nuanced discussion about the dangers of artificial intelligence, a move that could avoid some of the disaster scenarios envisioned by Musk. However, even the company admits t... […]
- Scholar explores impact of bias in facial-recognition software on February 19, 2019 at 9:52 am
McBride delivered the general introduction Feb. 7 to a considerable crowd in White Hall, he mentioned some of the major challenges of artificial intelligence (AI), including ... and the title of her l... […]
- Will AI Achieve Consciousness? Wrong Question on February 19, 2019 at 4:00 am
... foresaw several problems with this incipient state of affairs that Alan Turing and other early AI optimists largely overlooked. The real danger, he said, is that such machines, though helpless by ... […]
- Snapshot 2019: The State Of AI In Retail on February 14, 2019 at 9:50 am
And the moment employees catch wind that the AI is “wrong”, then your AI project is sunk. Ethical Dangers for Retail AI AI developers and researchers are grappling with how to make AI ethical – how to ... […]
- OpenAI’s new multitalented AI writes, translates, and slanders on February 14, 2019 at 9:00 am
OpenAI’s hypothesis is it’s better to talk about AI dangers “before they arrive” Clark says that language modeling algorithms like GPT-2 aren’t as mature as deepfakes, but they’re close enough to warr... […]
- The Technology 202: Pentagon promises to make ethics a priority in first AI strategy on February 13, 2019 at 6:00 am
The decision to make this a core part of its strategy indicates the Pentagon has taken notice of the warnings that top technologists like Elon Musk have made for years about the dangers of building AI ... […]
- Ex-gov’t agent: Crisis worse than 9/11 could come out of AI arms race on February 12, 2019 at 10:56 am
However, Meltzer said that neither Trump nor Warner honed-in on the true potential dangers of AI. Rather, the true dangers were that once AI starts taking over a large number of societal functions, da... […]
- Are we in danger of over-estimating AI? on February 12, 2019 at 9:36 am
“AI is just fancy maths” according to innovation specialist, digital ethicist and self-proclaimed ‘data philosopher’ Charles Radclyffe. He was speaking on a panel discussion that kicked off The Drum B... […]
via Bing News