In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways.
Common machine learning programs, when trained with ordinary human language available online, can acquire the cultural biases embedded in patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to objectionable views of race and gender.
Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers to process the natural language humans use to communicate, for instance in online text search, image categorization and automated translation.
“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”
The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.
As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) of human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar than when they are asked to pair two concepts they find dissimilar.
Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.
The Princeton team devised an experiment using a program that essentially functions as a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is of the sort that a startup machine learning company might use at the heart of its product. The GloVe algorithm represents the co-occurrence statistics of words in, say, a 10-word window of text. Words that often appear near one another have a stronger association than words that seldom do.
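To make this concrete, the sketch below shows how such word associations can be read out of pre-trained GloVe vectors using cosine similarity. This is not the authors' code; the file name "glove.6B.300d.txt" is an assumption standing in for any GloVe text file with one "word value value ..." entry per line.

```python
# Minimal sketch: reading word associations from pre-trained GloVe vectors.
# Assumes a GloVe text file is available locally; the path is illustrative.
import numpy as np

def load_glove(path):
    """Load GloVe vectors from a whitespace-separated text file into a dict."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def cosine(u, v):
    """Cosine similarity: higher values mean the words occur in more similar contexts."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

vecs = load_glove("glove.6B.300d.txt")  # assumed file name
print(cosine(vecs["rose"], vecs["love"]))   # typically higher ...
print(cosine(vecs["ant"], vecs["love"]))    # ... than this
```

Because the vectors are built from co-occurrence counts, words that keep company in text (like "rose" and "love") end up geometrically closer than words that rarely appear together.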
The Stanford researchers turned GloVe loose on a huge trawl of content from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian,” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
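A simplified sketch of that word-embedding analogue of the IAT follows. For each target word, its association score is the mean cosine similarity to one attribute set minus the mean similarity to the other; the two target sets are then compared with a standardized effect size. It reuses the cosine helper and vecs dictionary from the previous sketch, and the word lists below are illustrative rather than the paper's exact stimuli.

```python
# Simplified sketch of a word-embedding association test, continuing the
# previous example (cosine and vecs are defined there).
import numpy as np

def association(w, A, B, vecs):
    """Mean similarity of word w to attribute set A minus its mean similarity to B."""
    sim_a = np.mean([cosine(vecs[w], vecs[a]) for a in A])
    sim_b = np.mean([cosine(vecs[w], vecs[b]) for b in B])
    return sim_a - sim_b

def effect_size(X, Y, A, B, vecs):
    """Standardized difference in association between target sets X and Y."""
    x = [association(w, A, B, vecs) for w in X]
    y = [association(w, A, B, vecs) for w in Y]
    return (np.mean(x) - np.mean(y)) / np.std(np.array(x + y), ddof=1)

X = ["programmer", "engineer", "scientist"]   # target set 1
Y = ["nurse", "teacher", "librarian"]         # target set 2
A = ["man", "male"]                           # attribute set 1
B = ["woman", "female"]                       # attribute set 2
print(effect_size(X, Y, A, B, vecs))  # positive: X sits closer to A, Y closer to B
```

A positive effect size here would mirror the human IAT finding: the first group of target words is more strongly associated with the first attribute set than the second group is.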