Software may appear to operate without bias because it reaches conclusions strictly through computer code. That’s why many companies use algorithms to help weed out job applicants when hiring for a new position.
But a team of computer scientists from the University of Utah, the University of Arizona and Haverford College in Pennsylvania has discovered a way to determine whether an algorithm used for hiring decisions, loan approvals and comparably weighty tasks could be biased like a human being.
The researchers, led by Suresh Venkatasubramanian, an associate professor in the University of Utah’s School of Computing, have developed a technique to determine whether such software programs discriminate unintentionally and violate the legal standards for fair access to employment, housing and other opportunities. The team has also developed a method to fix these potentially troubled algorithms. Venkatasubramanian presented the findings Aug. 12 at the 21st ACM SIGKDD Conference on Knowledge Discovery and Data Mining in Sydney, Australia.
“There’s a growing industry around doing résumé filtering and résumé scanning to look for job applicants, so there is definitely interest in this,” says Venkatasubramanian. “If there are structural aspects of the testing process that would discriminate against one community just because of the nature of that community, that is unfair.”
Many companies have been using algorithms in software programs to help filter job applicants in the hiring process, typically because it can be overwhelming to sort through the applications manually when many people apply for the same job. A program can do that instead by scanning résumés, searching for keywords or numbers (such as school grade point averages), and then assigning an overall score to each applicant.
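The kind of screening described above can be sketched in a few lines. This is a hypothetical illustration, not any company’s actual system: the keywords, weights, and GPA pattern are all invented.

```python
# Hypothetical resume screen: scan for keywords and a GPA,
# then combine them into a single score. All names, weights,
# and patterns here are invented for illustration.
import re

KEYWORDS = {"python": 2.0, "sql": 1.5, "leadership": 1.0}

def score_resume(text):
    """Return a crude score from keyword hits plus GPA, if found."""
    lowered = text.lower()
    score = sum(w for kw, w in KEYWORDS.items() if kw in lowered)
    gpa = re.search(r"gpa[:\s]+([0-4]\.\d+)", lowered)
    if gpa:
        score += float(gpa.group(1))
    return score

print(score_resume("Experienced in Python and SQL. GPA: 3.8"))  # 7.3
```

Applicants are then ranked by this score, which is exactly where hidden proxies for protected attributes can slip in unnoticed.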
These programs also can learn as they analyze more data. Known as machine-learning algorithms, they can change and adapt like humans to better predict outcomes. Amazon uses similar algorithms to learn customers’ buying habits and target ads more accurately, and Netflix uses them to learn users’ movie tastes and recommend new viewing choices.
But there has been a growing debate on whether machine-learning algorithms can introduce unintentional bias much like humans do.
“The irony is that the more we design artificial intelligence technology that successfully mimics humans, the more that A.I. is learning in a way that we do, with all of our biases and limitations,” Venkatasubramanian says.
Venkatasubramanian’s research tests these algorithms against the legal definition of disparate impact, a doctrine in U.S. anti-discrimination law under which a policy may be considered discriminatory if it has an adverse impact on a group based on race, religion, gender, sexual orientation or other protected status.
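Disparate impact is often quantified with the EEOC’s “four-fifths rule”: if one group’s selection rate falls below 80 percent of the most-favored group’s rate, the practice may warrant scrutiny. A minimal sketch of that check, with made-up numbers:

```python
# Sketch of the EEOC "four-fifths" check commonly used to quantify
# disparate impact: compare each group's selection rate to the
# highest group's rate. The counts below are invented.
def disparate_impact_ratios(selected, applied):
    """selected/applied: dicts mapping group name -> counts."""
    rates = {g: selected[g] / applied[g] for g in applied}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

ratios = disparate_impact_ratios(
    selected={"group_a": 50, "group_b": 28},
    applied={"group_a": 100, "group_b": 100},
)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios, flagged)  # group_b's ratio is 0.56, below 0.8
```

The ratio is only a screening heuristic; the legal analysis also asks whether the practice is justified by business necessity.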
Venkatasubramanian’s research showed that a test can determine whether the algorithm in question is potentially biased. If the test — which ironically uses another machine-learning algorithm — can accurately predict a person’s race or gender from the data being analyzed, even when race and gender are hidden from the data, then there is a potential for bias under the definition of disparate impact.
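The idea behind the test can be illustrated with a toy example: try to predict the hidden protected attribute from a remaining feature, and see whether the prediction beats chance. This sketch uses synthetic data and a simple majority-vote rule per feature value rather than a real machine-learning model; any success above the chance rate signals a proxy in the data.

```python
# Toy proxy test: if a remaining feature (here, zip code) predicts
# the hidden attribute (gender) well above chance, the dataset
# encodes a proxy, so a screen trained on it could have disparate
# impact. Records are synthetic and invented for illustration.
from collections import Counter, defaultdict

records = [  # (zip_code, hidden_gender)
    ("90001", "F"), ("90001", "F"), ("90001", "F"), ("90001", "M"),
    ("10001", "M"), ("10001", "M"), ("10001", "M"), ("10001", "F"),
]

# "Train": take the majority gender per zip code.
by_zip = defaultdict(Counter)
for z, g in records:
    by_zip[z][g] += 1
predict = {z: c.most_common(1)[0][0] for z, c in by_zip.items()}

# Evaluate (a real test would hold out a separate split).
accuracy = sum(predict[z] == g for z, g in records) / len(records)
print(accuracy)  # 0.75, well above the 0.5 chance rate
```

A high accuracy here does not prove the hiring algorithm discriminates; it only shows the data contains enough signal that it could.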
“I’m not saying it’s doing it, but I’m saying there is at least a potential for there to be a problem,” Venkatasubramanian says.