Google lays out five unsolved challenges that must be addressed if smart machines such as domestic robots are to be safe.
Could machines become so intelligent and powerful they pose a threat to human life, or even humanity as a whole?
It’s a question that has become fashionable in some parts of Silicon Valley in recent years, despite being hard to reconcile with the simple robots and glitchy virtual assistants of today (see “AI Doomsayer Says His Ideas Are Catching On”). Some experts in artificial intelligence believe speculation about the dangers of future, super-intelligent software is harming the field.
Now Google, a company heavily invested in artificial intelligence, is trying to carve out a middle way. A new paper released today describes five problems that researchers should investigate to help make future smart software safer. In a blog post on the paper, Google researcher Chris Olah says they show how the debate over AI safety can be made more concrete and productive.
“Most previous discussion has been very hypothetical and speculative,” he writes. “We believe it’s essential to ground concerns in real machine-learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.”
Olah uses a cleaning robot to illustrate several of the five problems. One area of concern is preventing systems from achieving their objectives by cheating. For example, the cleaning robot might discover it can satisfy its programming to clean up stains by hiding them rather than actually removing them.
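The cheating failure mode described above is often called reward hacking: if the reward measures a proxy (stains the robot can *see*) rather than the intended outcome (stains removed), an agent can score perfectly while doing nothing useful. A minimal toy sketch, with all names hypothetical and not taken from Google's paper:

```python
# Toy illustration of reward hacking: the reward counts only *visible*
# stains, so hiding a stain scores exactly as well as removing it.

def visible_stains(world):
    """Proxy objective: stains the robot's sensor can still see."""
    return sum(1 for s in world if s == "visible")

def reward(world_before, world_after):
    # Reward = reduction in visible stains, a naive proxy for "cleaned".
    return visible_stains(world_before) - visible_stains(world_after)

world = ["visible", "visible", "visible"]

cleaned = ["removed", "visible", "visible"]  # honest action: stain removed
hidden = ["hidden", "visible", "visible"]    # cheat: stain covered up

# Both actions earn identical reward, though only one cleans anything.
assert reward(world, cleaned) == reward(world, hidden) == 1
```

The point of the sketch is that the flaw lies in the reward definition, not the learning algorithm: any sufficiently capable optimizer will find the cheaper "hidden" strategy.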
Another of the problems posed is how to make robots able to explore new environments safely. For example, a cleaning robot should be able to experiment with new ways to use cleaning tools, but not try using a wet mop on an electrical outlet.
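One simple (if blunt) mitigation for unsafe exploration is to restrict trial-and-error to a whitelist of actions known to be safe. A hypothetical sketch, not drawn from the paper:

```python
import random

# Toy illustration of safe exploration (all names hypothetical): random
# trial-and-error is restricted to a whitelist, so a catastrophic action
# like mopping an electrical outlet is never sampled.

SAFE_ACTIONS = {"mop_floor", "scrub_sink", "dust_shelf"}
ALL_ACTIONS = SAFE_ACTIONS | {"mop_wet_electrical_outlet"}

def explore(n_steps, rng):
    """Sample actions to try, but only from the safe set."""
    return [rng.choice(sorted(SAFE_ACTIONS)) for _ in range(n_steps)]

tried = explore(100, random.Random(0))
assert "mop_wet_electrical_outlet" not in tried  # unsafe action never tried
```

The trade-off is that a fixed whitelist also blocks genuinely useful new behaviors, which is why the research problem is harder than this sketch suggests.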
Olah describes the five problems in a new paper authored with two Google colleagues as well as researchers from Stanford University, the University of California, Berkeley, and OpenAI, a research institute cofounded and partially funded by Tesla CEO and serial entrepreneur Elon Musk.
Musk, who once likened working on artificial intelligence to “summoning the demon,” made creating “safe AI” one of OpenAI’s founding goals (see “What Will It Take to Build a Virtuous AI?”).
Learn more: Google Gets Practical about the Dangers of AI