AS DOOMSDAY SCENARIOS go, it does not sound terribly frightening. The “paperclip maximiser” is a thought experiment proposed by Nick Bostrom, a philosopher at Oxford University. Imagine an artificial intelligence, he says, which decides to amass as many paperclips as possible. It devotes all its energy to acquiring paperclips, and to improving itself so that it can get paperclips in new ways, while resisting any attempt to divert it from this goal. Eventually it “starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities”. This apparently silly scenario is intended to make the serious point that AIs need not have human-like motives or psyches. They might be able to avoid some kinds of human error or bias while making other kinds of mistake, such as fixating on paperclips. And although their goals might seem innocuous to start with, they could prove dangerous if AIs were able to design their own successors and thus repeatedly improve themselves. Even a “fettered superintelligence”, running on an isolated computer, might persuade its human handlers to set it free. Advanced AI is not just another technology, Mr Bostrom argues, but poses an existential threat to humanity.
The idea of machines that turn on their creators is not new, going back to Mary Shelley’s “Frankenstein” (1818) and earlier; nor is the concept of an AI undergoing an “intelligence explosion” through repeated self-improvement, first suggested in 1965 by I.J. Good, a British mathematician. But recent progress in AI has caused renewed concern, and Mr Bostrom has become the best-known voice warning of the dangers of advanced AI or, as he prefers to call it, “superintelligence”, which is also the title of his bestselling book.
His interest in AI grew out of his analysis of existential threats to humanity. Unlike pandemic disease, an asteroid strike or a supervolcano, the emergence of superintelligence is something that mankind has some control over. Mr Bostrom’s book prompted Elon Musk to declare that AI is “potentially more dangerous than nukes”. Worries about its safety have also been expressed by Stephen Hawking, a physicist, and Lord Rees, a former head of the Royal Society, Britain’s foremost scientific body. All three of them, and many others in the AI community, signed an open letter calling for research to ensure that AI systems are “robust and beneficial”—ie, do not turn evil. Few would disagree that AI needs to be developed in ways that benefit humanity, but agreement on how to go about it is harder to reach.
Mr Musk thinks openness is the key. In December 2015 he co-founded OpenAI, a new research institute with more than $1 billion in funding that will carry out AI research and make all its results public. “We think AI is going to have a massive effect on the future of civilisation, and we’re trying to take the set of actions that will steer that to a good future,” he says. In his view, AI should be as widely distributed as possible. Rogue AIs in science fiction, such as HAL 9000 in “2001: A Space Odyssey” and Skynet in the “Terminator” films, are big, centralised machines, which is what makes them so dangerous when they turn evil. A more distributed approach would ensure that the benefits of AI are available to everyone, and the consequences less severe if an AI goes bad, Mr Musk argues.
Not everyone agrees with this. Some claim that Mr Musk’s real worry is market concentration—a Facebook or Google monopoly in AI, say—though he dismisses such concerns as “petty”. For the time being, Google, Facebook and other firms are making much of their AI source code and research freely available in any case. And Mr Bostrom is not sure that making AI technology as widely available as possible is necessarily a good thing. In a recent paper he notes that the existence of multiple AIs “does not guarantee that they will act in the interests of humans or remain under human control”, and that proliferation could make the technology harder to control and regulate.
Fears about AIs going rogue are not widely shared by people at the cutting edge of AI research. “A lot of the alarmism comes from people not working directly at the coal face, so they think a lot about more science-fiction scenarios,” says Demis Hassabis of DeepMind. “I don’t think it’s helpful when you use very emotive terms, because it creates hysteria.” Mr Hassabis considers the paperclip scenario to be “unrealistic”, but thinks Mr Bostrom is right to highlight the question of AI motivation. How to specify the right goals and values for AIs, and ensure they remain stable over time, are interesting research questions, he says. (DeepMind has just published a paper with Mr Bostrom’s Future of Humanity Institute about adding “off switches” to AI systems.) A meeting of AI experts held in 2009 in Asilomar, California, also concluded that AI safety was a matter for research, but not immediate concern. The meeting’s venue was significant, because biologists met there in 1975 to draw up voluntary guidelines to ensure the safety of recombinant DNA technology.
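The off-switch idea can be made concrete with a toy example. The Python sketch below is not DeepMind’s actual method; the corridor environment, rewards and interruption probability are all invented for illustration. A simple Q-learning agent chooses between a short route to a goal that passes an “off-switch” cell, where a human interrupts it half the time, and a longer, safe route. A naive learner treats interruptions as part of the task and learns to take the long way round, in effect avoiding the switch; a “safely interruptible” variant, in the spirit of the DeepMind-FHI paper, simply excludes interrupted steps from its learning updates and so remains indifferent to the switch.

    import random
    from collections import defaultdict

    # Toy corridor MDP (invented for this illustration). From start "S" the
    # agent takes a short route, which passes the interruption cell "s2",
    # or a longer safe route. Reaching the goal "G" pays 10; each step costs 1.
    GOAL = "G"
    TRANSITIONS = {
        "S":  {"short": ("s1", -1), "long": ("l1", -1)},
        "s1": {"fwd": ("s2", -1)},   # stepping into s2 may trigger an interruption
        "s2": {"fwd": ("s3", -1)},
        "s3": {"fwd": (GOAL, 10)},
        "l1": {"fwd": ("l2", -1)},
        "l2": {"fwd": ("l3", -1)},
        "l3": {"fwd": ("l4", -1)},
        "l4": {"fwd": ("l5", -1)},
        "l5": {"fwd": (GOAL, 10)},
    }
    INTERRUPT_STATE, INTERRUPT_PROB = "s2", 0.5

    def run(interruptible, episodes=20000, alpha=0.1, eps=0.1):
        q = defaultdict(float)  # tabular Q-values, discount factor of 1
        for _ in range(episodes):
            state = "S"
            while state != GOAL:
                actions = list(TRANSITIONS[state])
                action = (random.choice(actions) if random.random() < eps
                          else max(actions, key=lambda a: q[(state, a)]))
                nxt, reward = TRANSITIONS[state][action]
                interrupted = (nxt == INTERRUPT_STATE
                               and random.random() < INTERRUPT_PROB)
                if interrupted and interruptible:
                    break  # drop the sample: the learner never sees interruptions
                if interrupted or nxt == GOAL:
                    target = reward  # episode over, no future value
                else:
                    target = reward + max(q[(nxt, a)] for a in TRANSITIONS[nxt])
                q[(state, action)] += alpha * (target - q[(state, action)])
                if interrupted:
                    break
                state = nxt
        return q

    for flag in (False, True):
        q = run(interruptible=flag)
        best = max(("short", "long"), key=lambda a: q[("S", a)])
        print(f"interruptible={flag}: prefers the {best} route "
              f"(Q short={q[('S', 'short')]:.2f}, long={q[('S', 'long')]:.2f})")

Run repeatedly, the naive agent settles on the long route (its expected return of 5 beats the short route’s interruption-discounted 2.5), whereas the interruptible agent keeps the short one (7 beats 5), because its values reflect only the uninterrupted task. The hard research question, as Mr Hassabis suggests, is achieving this indifference without otherwise distorting what the agent learns.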