Software may appear to operate without bias because it strictly uses computer code to reach conclusions. That’s why many companies use algorithms to help weed out job applicants when hiring for a new position.
But a team of computer scientists from the University of Utah, the University of Arizona and Haverford College in Pennsylvania has developed a way to find out whether an algorithm used for hiring decisions, loan approvals and comparably weighty tasks could be biased like a human being.
The researchers, led by Suresh Venkatasubramanian, an associate professor in the University of Utah’s School of Computing, have devised a technique for determining whether such software programs discriminate unintentionally and violate legal standards for fair access to employment, housing and other opportunities. The team has also developed a method for fixing these potentially troubled algorithms. Venkatasubramanian presented the findings Aug. 12 at the 21st ACM SIGKDD Conference on Knowledge Discovery and Data Mining in Sydney, Australia.
“There’s a growing industry around doing résumé filtering and résumé scanning to look for job applicants, so there is definitely interest in this,” says Venkatasubramanian. “If there are structural aspects of the testing process that would discriminate against one community just because of the nature of that community, that is unfair.”
Many companies use algorithms in software programs to help filter job applicants during hiring, typically because manually sorting through a large pool of applications is overwhelming. Instead, a program can scan each résumé, search for keywords or numbers (such as grade point averages) and assign the applicant an overall score.
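As a rough illustration of how such a filter might work (this is a toy sketch, not any vendor’s actual system; the keywords, weights and GPA bonus below are invented):

```python
# Toy sketch of a keyword-based resume scorer. The keyword list,
# weights, and GPA bonus are invented for illustration only.
KEYWORD_WEIGHTS = {"python": 3, "sql": 2, "management": 2, "communication": 1}

def score_resume(text: str, gpa: float) -> float:
    """Score a resume by weighted keyword hits plus a small GPA bonus."""
    words = text.lower().split()
    keyword_score = sum(w for kw, w in KEYWORD_WEIGHTS.items() if kw in words)
    gpa_bonus = max(0.0, gpa - 3.0) * 2  # reward GPAs above 3.0
    return keyword_score + gpa_bonus

print(score_resume("Experienced in Python and SQL reporting", 3.6))  # 6.2
```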
These programs can also learn as they analyze more data. Known as machine-learning algorithms, they change and adapt much as humans do, so their predictions improve over time. Amazon uses similar algorithms to learn customers’ buying habits and target ads more accurately, and Netflix uses them to learn users’ movie tastes when recommending new viewing choices.
But there is growing debate over whether machine-learning algorithms can introduce unintentional bias much as humans do.
“The irony is that the more we design artificial intelligence technology that successfully mimics humans, the more that A.I. is learning in a way that we do, with all of our biases and limitations,” Venkatasubramanian says.
Venkatasubramanian’s research tests these algorithms against the legal definition of disparate impact, a doctrine in U.S. anti-discrimination law under which a policy may be considered discriminatory if it has an adverse effect on any group on the basis of race, religion, gender, sexual orientation or other protected status, even when the policy itself appears neutral.
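Disparate impact is commonly quantified with the EEOC’s “four-fifths rule”: if one group’s selection rate falls below 80 percent of the most-favored group’s rate, the practice is flagged. A minimal sketch of that check, with invented counts:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

# Invented example counts for two applicant groups.
rate_a = selection_rate(50, 100)   # 0.50
rate_b = selection_rate(30, 100)   # 0.30

# Four-fifths rule: compare the lower rate to the higher one.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Impact ratio: {ratio:.2f}")  # 0.60
print("Potential disparate impact" if ratio < 0.8 else "Passes four-fifths rule")
```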
Venkatasubramanian’s research showed that a simple test can reveal whether the algorithm in question is potentially biased. If the test, which itself uses another machine-learning algorithm, can accurately predict a person’s race or gender from the data being analyzed, even though race and gender are hidden from that data, then there is a potential for bias under the definition of disparate impact.
“I’m not saying it’s doing it, but I’m saying there is at least a potential for there to be a problem,” Venkatasubramanian says.
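In spirit, the test works like this: hide the protected attribute, train a second classifier to predict it from the remaining features, and treat high predictive accuracy as a warning sign. The sketch below uses scikit-learn on synthetic data; it illustrates the idea, not the authors’ exact procedure.

```python
# Predictability test sketch: if a classifier can recover a hidden
# protected attribute from the other features, those features encode it,
# and a model trained on them could produce disparate impact.
# The synthetic data here is an invented stand-in for real records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
protected = rng.integers(0, 2, size=n)  # hidden attribute (e.g., gender)

# "Neutral" features that nonetheless correlate with the protected
# attribute, the way a zip code or alma mater might in real data.
features = np.column_stack([
    rng.normal(loc=protected, scale=1.0, size=n),  # correlated feature
    rng.normal(size=n),                            # pure noise feature
])

X_train, X_test, y_train, y_test = train_test_split(
    features, protected, test_size=0.3, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"Protected attribute recovered with accuracy {acc:.2f}")
# Accuracy well above 0.5 means the features leak the hidden attribute,
# signaling potential disparate impact under this kind of test.
```

The design choice worth noting is that the test never needs access to the protected attribute at decision time; it only asks whether the rest of the data could reconstruct it.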
Read more: Programming and Prejudice