In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways.
Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to objectionable views of race and gender.
Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers to process the natural language humans use to communicate, for instance in online text searches, image categorization and automated translation.
“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”
The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.
As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.
Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.
The Princeton team devised an experiment in which a program essentially functioned as a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is of the sort that a startup machine learning company might use at the heart of its product. The GloVe algorithm represents the co-occurrence statistics of words within, say, a 10-word window of text: words that often appear near one another end up more strongly associated than words that seldom do.
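To make the windowed co-occurrence idea concrete, here is a minimal, illustrative sketch in Python of the counting step that such embeddings build on. It is not the GloVe algorithm itself, which also weights co-occurrences by distance and then fits vectors to the resulting statistics; it only shows how words appearing within 10 tokens of each other accumulate association strength.

```python
# Illustrative sketch: tally how often each pair of words appears within
# a fixed window of one another. Frequent neighbors end up with large
# counts, which is the raw signal GloVe-style embeddings are fit to.
from collections import Counter

def cooccurrence_counts(tokens, window=10):
    """Count pair co-occurrences within `window` tokens of each other."""
    counts = Counter()
    for i, word in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            counts[tuple(sorted((word, tokens[j])))] += 1
    return counts

text = "the rose and the daisy bloomed while an ant crawled past the moth"
print(cooccurrence_counts(text.split(), window=10).most_common(3))
```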
The Stanford researchers turned GloVe loose on a huge trawl of content from the World Wide Web containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian,” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
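A rough sketch of that kind of test appears below. It assumes the publicly released Common Crawl GloVe vectors (the glove.840B.300d.txt file, trained on the 840-billion-word corpus mentioned above) and computes, for each target word, its mean cosine similarity to one attribute set minus its mean similarity to the other. This mirrors the shape of the association measure in the paper's Word-Embedding Association Test, though the published statistic involves further steps such as effect sizes and permutation tests.

```python
# Sketch of a word-embedding association test over pre-trained GloVe
# vectors. Positive scores mean the target word sits closer to the
# "male" attribute set than to the "female" one in vector space.
import numpy as np

def load_glove(path, vocab, dim=300):
    """Load vectors for the needed words from a GloVe text file."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            # A few tokens in the 840B release contain spaces, so treat
            # the trailing `dim` fields as the vector.
            word = " ".join(parts[:-dim])
            if word in vocab:
                vectors[word] = np.asarray(parts[-dim:], dtype=float)
    return vectors

def cosine(u, v):
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word, attr_a, attr_b, vec):
    """Mean cosine similarity to attribute set A minus attribute set B."""
    return (np.mean([cosine(vec[word], vec[a]) for a in attr_a])
            - np.mean([cosine(vec[word], vec[b]) for b in attr_b]))

targets = ["programmer", "engineer", "scientist",
           "nurse", "teacher", "librarian"]
male, female = ["man", "male"], ["woman", "female"]
vec = load_glove("glove.840B.300d.txt", set(targets + male + female))
for t in targets:
    print(f"{t:>10}: {association(t, male, female, vec):+.3f}")
```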