AS DOOMSDAY SCENARIOS go, it does not sound terribly frightening. The “paperclip maximiser” is a thought experiment proposed by Nick Bostrom, a philosopher at Oxford University. Imagine an artificial intelligence, he says, which decides to amass as many paperclips as possible. It devotes all its energy to acquiring paperclips, and to improving itself so that it can get paperclips in new ways, while resisting any attempt to divert it from this goal. Eventually it “starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities”. This apparently silly scenario is intended to make the serious point that AIs need not have human-like motives or psyches. They might be able to avoid some kinds of human error or bias while making other kinds of mistake, such as fixating on paperclips. And although their goals might seem innocuous to start with, they could prove dangerous if AIs were able to design their own successors and thus repeatedly improve themselves. Even a “fettered superintelligence”, running on an isolated computer, might persuade its human handlers to set it free. Advanced AI is not just another technology, Mr Bostrom argues, but poses an existential threat to humanity.
The idea of machines that turn on their creators is not new, going back to Mary Shelley’s “Frankenstein” (1818) and earlier; nor is the concept of an AI undergoing an “intelligence explosion” through repeated self-improvement, which was first suggested by the statistician I.J. Good in 1965. But recent progress in AI has caused renewed concern, and Mr Bostrom has become the best-known proponent of the dangers of advanced AI or, as he prefers to call it, “superintelligence”, the title of his bestselling book.
His interest in AI grew out of his analysis of existential threats to humanity. Unlike pandemic disease, an asteroid strike or a supervolcano, the emergence of superintelligence is something that mankind has some control over. Mr Bostrom’s book prompted Elon Musk to declare that AI is “potentially more dangerous than nukes”. Worries about its safety have also been expressed by Stephen Hawking, a physicist, and Lord Rees, a former head of the Royal Society, Britain’s foremost scientific body. All three of them, and many others in the AI community, signed an open letter calling for research to ensure that AI systems are “robust and beneficial”—ie, do not turn evil. Few would disagree that AI needs to be developed in ways that benefit humanity, but agreement on how to go about it is harder to reach.
Mr Musk thinks openness is the key. In December 2015 he co-founded OpenAI, a new research institute with more than $1 billion in funding that will carry out AI research and make all its results public. “We think AI is going to have a massive effect on the future of civilisation, and we’re trying to take the set of actions that will steer that to a good future,” he says. In his view, AI should be as widely distributed as possible. Rogue AIs in science fiction, such as HAL 9000 in “2001: A Space Odyssey” and Skynet in the “Terminator” films, are big, centralised machines, which is what makes them so dangerous when they turn evil. A more distributed approach, he argues, would ensure that the benefits of AI are available to everyone, and that the consequences are less severe if an AI goes bad.
Not everyone agrees with this. Some claim that Mr Musk’s real worry is market concentration—a Facebook or Google monopoly in AI, say—though he dismisses such concerns as “petty”. For the time being, Google, Facebook and other firms are making much of their AI source code and research freely available in any case. And Mr Bostrom is not sure that making AI technology as widely available as possible is necessarily a good thing. In a recent paper he notes that the existence of multiple AIs “does not guarantee that they will act in the interests of humans or remain under human control”, and that proliferation could make the technology harder to control and regulate.
Fears about AIs going rogue are not widely shared by people at the cutting edge of AI research. “A lot of the alarmism comes from people not working directly at the coal face, so they think a lot about more science-fiction scenarios,” says Demis Hassabis of DeepMind. “I don’t think it’s helpful when you use very emotive terms, because it creates hysteria.” Mr Hassabis considers the paperclip scenario to be “unrealistic”, but thinks Mr Bostrom is right to highlight the question of AI motivation. How to specify the right goals and values for AIs, and how to ensure they remain stable over time, are interesting research questions, he says. (DeepMind has just published a paper with Mr Bostrom’s Future of Humanity Institute about adding “off switches” to AI systems.) A meeting of AI experts held in 2009 in Asilomar, California, also concluded that AI safety was a matter for research, but not of immediate concern. The meeting’s venue was significant, because biologists met there in 1975 to draw up voluntary guidelines to ensure the safety of recombinant DNA technology.
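The intuition behind such “off switches” can be sketched in a few lines of code. The toy program below is not the algorithm from the DeepMind-FHI paper, merely an illustration of the underlying idea: a learning agent whose updates simply ignore the steps on which a human operator intervenes, so that it never learns to treat interruption as a cost to be avoided. The environment, rewards and interruption probability are all invented for the example.

```python
import random
from collections import defaultdict

# Illustrative sketch of a "safely interruptible" learning loop.
# Everything here (the walk-on-integers environment, the rewards,
# the 5% interruption chance) is a made-up stand-in, not the setup
# from the actual DeepMind/FHI paper.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = ["left", "right"]
Q = defaultdict(float)  # maps (state, action) -> estimated value

def choose_action(state):
    """Epsilon-greedy action selection over the toy action set."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def step(state, action):
    """Toy environment: walk on the integers, reward for moving right."""
    next_state = state + (1 if action == "right" else -1)
    reward = 1.0 if action == "right" else 0.0
    return next_state, reward

def interrupted():
    """Stand-in for a human operator pressing the off switch."""
    return random.random() < 0.05

state = 0
for _ in range(10_000):
    action = choose_action(state)
    if interrupted():
        # The operator overrides the agent. Crucially, this transition
        # is NOT used to update Q, so the agent never learns that being
        # interrupted costs it reward, and so acquires no incentive to
        # resist or dodge the off switch.
        state = 0  # operator resets the agent
        continue
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state
```

The design point is the `continue`: because interrupted steps never enter the learning update, the agent’s learned policy is the same as if interruptions never happened, which is roughly what “safely interruptible” means.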