Software may appear to operate without bias because it strictly uses computer code to reach conclusions. That’s why many companies use algorithms to help weed out job applicants when hiring for a new position.
But a team of computer scientists from the University of Utah, the University of Arizona and Haverford College in Pennsylvania has discovered a way to find out whether an algorithm used for hiring decisions, loan approvals and comparably weighty tasks could be biased like a human being.
The researchers, led by Suresh Venkatasubramanian, an associate professor in the University of Utah’s School of Computing, have discovered a technique to determine whether such software programs discriminate unintentionally and violate legal standards for fair access to employment, housing and other opportunities. The team has also devised a method for fixing these potentially troubled algorithms. Venkatasubramanian presented the findings Aug. 12 at the 21st ACM SIGKDD Conference on Knowledge Discovery and Data Mining in Sydney, Australia.
“There’s a growing industry around doing résumé filtering and résumé scanning to look for job applicants, so there is definitely interest in this,” says Venkatasubramanian. “If there are structural aspects of the testing process that would discriminate against one community just because of the nature of that community, that is unfair.”
Many companies use algorithms in software programs to help filter out job applicants in the hiring process, typically because manually sorting through the applications can be overwhelming when many people apply for the same position. A program can do the sorting instead, scanning résumés for keywords or numbers (such as school grade point averages) and assigning each applicant an overall score.
These programs can also learn as they analyze more data. Known as machine-learning algorithms, they change and adapt much as humans do so they can better predict outcomes. Amazon uses similar algorithms to learn customers’ buying habits and target ads more accurately, and Netflix uses them to learn users’ movie tastes when recommending new viewing choices.
But there has been a growing debate on whether machine-learning algorithms can introduce unintentional bias much like humans do.
“The irony is that the more we design artificial intelligence technology that successfully mimics humans, the more that A.I. is learning in a way that we do, with all of our biases and limitations,” Venkatasubramanian says.
Venkatasubramanian’s research tests whether these software algorithms are biased under the legal definition of disparate impact, a doctrine in U.S. anti-discrimination law that says a policy may be considered discriminatory if it has an adverse effect on any group based on race, religion, gender, sexual orientation or other protected status.
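In U.S. employment law, disparate impact is commonly quantified with the EEOC’s “four-fifths rule”: if the selection rate for one group is less than 80 percent of the rate for the most favorably treated group, the practice may be considered to have a disparate impact. A minimal sketch of that arithmetic, using hypothetical screening outcomes (the group labels and numbers are invented for illustration):

```python
def disparate_impact_ratio(selected, group):
    """Ratio of the lowest group selection rate to the highest.
    Values below 0.8 suggest disparate impact under the EEOC
    four-fifths guideline."""
    rate = {}
    for g in set(group):
        outcomes = [s for s, gg in zip(selected, group) if gg == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return min(rate.values()) / max(rate.values())

# Hypothetical data: group 'a' is selected at 50%, group 'b' at 30%.
selected = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0] + [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
group = ['a'] * 10 + ['b'] * 10

ratio = disparate_impact_ratio(selected, group)  # 0.3 / 0.5 = 0.6, below 0.8
```

Here the ratio of 0.6 falls below the 0.8 threshold, so this hypothetical screening process would warrant scrutiny.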
Venkatasubramanian’s research showed that a test can determine whether the algorithm in question is possibly biased. If the test, which ironically uses another machine-learning algorithm, can accurately predict a person’s race or gender from the data being analyzed, even though race and gender are hidden from that data, then there is a potential for bias under the definition of disparate impact.
“I’m not saying it’s doing it, but I’m saying there is at least a potential for there to be a problem,” Venkatasubramanian says.
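The intuition behind the test described above is that if some visible feature (a ZIP code, say) acts as a proxy for a hidden protected attribute, then even a model that never sees race or gender can discriminate through that proxy. A minimal sketch of such an audit, using a deliberately simple majority-vote predictor and invented data (the feature values, group labels, and the 0.7 flagging threshold are all illustrative assumptions, not the researchers’ actual method):

```python
from collections import Counter, defaultdict

def audit_for_proxy_bias(features, protected, threshold=0.7):
    """Try to predict the hidden protected attribute from a visible
    feature alone.  For each feature value, predict the majority
    protected class seen with that value.  High accuracy means the
    feature is a strong proxy, so a model trained on it could produce
    disparate impact even with the protected attribute removed."""
    by_value = defaultdict(Counter)
    for x, y in zip(features, protected):
        by_value[x][y] += 1
    predictor = {x: counts.most_common(1)[0][0] for x, counts in by_value.items()}
    correct = sum(predictor[x] == y for x, y in zip(features, protected))
    accuracy = correct / len(features)
    return accuracy, accuracy >= threshold

# Hypothetical data: neighborhood 'A' is mostly group 0, 'B' mostly group 1.
neighborhoods = ['A'] * 40 + ['B'] * 40
groups = [0] * 36 + [1] * 4 + [1] * 36 + [0] * 4

acc, flagged = audit_for_proxy_bias(neighborhoods, groups)  # acc = 0.9, flagged
```

Because the neighborhood predicts group membership with 90 percent accuracy, the audit flags it: a résumé-screening model that weighs this feature could disadvantage one group without ever being shown race or gender.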