Software may appear to operate without bias because it strictly uses computer code to reach conclusions. That’s why many companies use algorithms to help weed out job applicants when hiring for a new position.
But a team of computer scientists from the University of Utah, the University of Arizona and Haverford College in Pennsylvania has discovered a way to determine whether an algorithm used for hiring decisions, loan approvals and comparably weighty tasks could be biased like a human being.
The researchers, led by Suresh Venkatasubramanian, an associate professor in the University of Utah’s School of Computing, developed a technique to determine whether such software programs discriminate unintentionally and violate legal standards for fair access to employment, housing and other opportunities. The team also devised a method to fix these potentially troubled algorithms. Venkatasubramanian presented the findings Aug. 12 at the 21st ACM SIGKDD Conference on Knowledge Discovery and Data Mining in Sydney, Australia.
“There’s a growing industry around doing résumé filtering and résumé scanning to look for job applicants, so there is definitely interest in this,” says Venkatasubramanian. “If there are structural aspects of the testing process that would discriminate against one community just because of the nature of that community, that is unfair.”
Many companies use algorithms in software programs to help filter job applicants in the hiring process, typically because sorting through the applications manually becomes overwhelming when many people apply for the same job. Instead, a program can scan résumés, search for keywords or numbers (such as school grade point averages) and assign each applicant an overall score.
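The kind of résumé scorer described above can be sketched in a few lines. The keyword weights and the GPA bonus here are illustrative assumptions, not the actual criteria any hiring vendor uses:

```python
import re

# Hypothetical keyword weights; real systems would use far richer criteria.
KEYWORD_WEIGHTS = {"python": 3, "sql": 2, "management": 2}

def score_resume(text: str, gpa: float) -> float:
    """Score a résumé by weighted keyword hits plus a GPA bonus (0-4 scale)."""
    words = re.findall(r"[a-z]+", text.lower())
    keyword_score = sum(KEYWORD_WEIGHTS.get(w, 0) for w in words)
    return keyword_score + gpa

print(score_resume("Experienced in Python and SQL", 3.5))  # 3 + 2 + 3.5 = 8.5
```

Even a scorer this simple can encode bias indirectly, for example if the chosen keywords correlate with one community's background more than another's.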
These programs can also learn as they analyze more data. Known as machine-learning algorithms, they change and adapt much as humans do, so they can better predict outcomes. Amazon uses similar algorithms to learn customers’ buying habits and target ads more accurately, and Netflix uses them to learn users’ movie tastes when recommending new viewing choices.
But there has been a growing debate on whether machine-learning algorithms can introduce unintentional bias much like humans do.
“The irony is that the more we design artificial intelligence technology that successfully mimics humans, the more that A.I. is learning in a way that we do, with all of our biases and limitations,” Venkatasubramanian says.
Venkatasubramanian’s research evaluates these algorithms against the legal definition of disparate impact, a theory in U.S. anti-discrimination law under which a policy may be considered discriminatory if it has an adverse effect on any group based on race, religion, gender, sexual orientation or other protected status.
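In U.S. employment guidelines, disparate impact is commonly quantified with the four-fifths rule: if one group's selection rate is less than 80% of another's, the practice may be flagged. A minimal sketch with made-up toy data (1 = hired, 0 = rejected):

```python
def selection_rate(outcomes):
    """Fraction of applicants in a group with a positive outcome."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

majority = [1, 1, 1, 0, 1]   # 80% selection rate
minority = [1, 0, 0, 0, 1]   # 40% selection rate

ratio = disparate_impact_ratio(majority, minority)
print(ratio)           # 0.5
print(ratio < 0.8)     # True: below the four-fifths threshold
```

A ratio of 0.5 here would suggest potential disparate impact, though the legal determination involves far more than this single number.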
The research showed that a test can determine whether the algorithm in question is possibly biased. If the test — which, ironically, uses another machine-learning algorithm — can accurately predict a person’s race or gender from the data being analyzed, even though race and gender are hidden from the algorithm, then there is a potential for bias under the definition of disparate impact.
“I’m not saying it’s doing it, but I’m saying there is at least a potential for there to be a problem,” Venkatasubramanian says.