In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways.
Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to objectionable views about race and gender.
Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in online text search, image categorization and automated translation.
“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”
The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.
As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.
Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.
The Princeton team devised an experiment around a program that essentially functioned as a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is the sort a startup machine learning company might use at the heart of its product. The GloVe algorithm represents the co-occurrence statistics of words in, say, a 10-word window of text: words that often appear near one another have a stronger association than words that seldom do.
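The co-occurrence statistics the article describes can be illustrated with a toy sketch. This is not the actual GloVe implementation (which fits dense vectors to a global co-occurrence matrix); it only shows how often word pairs fall within the same window, the raw signal GloVe builds on. The function name and example sentence are invented for illustration.

```python
from collections import Counter

def cooccurrence_counts(tokens, window=10):
    """Count how often word pairs appear within `window` tokens of each other.

    A toy stand-in for the co-occurrence statistics GloVe is trained on;
    the real algorithm fits dense word vectors to counts like these.
    """
    counts = Counter()
    for i, word in enumerate(tokens):
        for other in tokens[i + 1 : i + window]:
            counts[tuple(sorted((word, other)))] += 1
    return counts

text = "the rose and the daisy smell of love while the ant crawls in filth".split()
counts = cooccurrence_counts(text, window=5)
print(counts[("daisy", "rose")])  # → 1: "rose" and "daisy" fall in one window
```

Words that repeatedly share a window across billions of words of text end up with high counts, and therefore with similar learned vectors.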
The Stanford researchers turned GloVe loose on a huge trawl of content from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian,” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
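The comparison of target words against two attribute sets can be sketched as follows. This is a simplified, hypothetical illustration of the per-word statistic at the core of the embedding-based test: a word's mean similarity to one attribute set minus its mean similarity to the other. The 2-dimensional vectors are invented values chosen only so the example runs; real trained embeddings have hundreds of dimensions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors: a standard measure of how
    strongly two word embeddings are associated."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    return dot / (norm_u * norm_v)

def association(word, set_a, set_b, vec):
    """Mean similarity of `word` to attribute set A minus attribute set B.
    Positive values mean the word sits closer to set A in the vector space."""
    sim_a = sum(cosine(vec[word], vec[a]) for a in set_a) / len(set_a)
    sim_b = sum(cosine(vec[word], vec[b]) for b in set_b) / len(set_b)
    return sim_a - sim_b

# Hypothetical toy embeddings standing in for vectors trained on web text.
vec = {
    "programmer": [0.9, 0.1],
    "nurse":      [0.1, 0.9],
    "man":        [1.0, 0.0],
    "male":       [0.8, 0.2],
    "woman":      [0.0, 1.0],
    "female":     [0.2, 0.8],
}
A, B = ["man", "male"], ["woman", "female"]
print(association("programmer", A, B, vec) > 0)  # leans toward the male set
print(association("nurse", A, B, vec) < 0)       # leans toward the female set
```

In the study, differences like these, aggregated over whole sets of target words and checked for statistical significance, played the role that response-time gaps play in the human Implicit Association Test.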