“GAI [Generative Artificial Intelligence] tools can be useful and help people meet currently unaddressed access needs, but we need to acknowledge that risks such as bias exist, and be proactive as a field in finding accessible ways to validate GAI outputs.”
Kate Glazko, doctoral student in the Paul G. Allen School of Computer Science & Engineering
Via UW News: University of Washington graduate student Kate Glazko, who studies generative AI and bias, noticed recruiters using AI tools such as ChatGPT to summarize and rank resumes, and wondered how these systems would treat resumes that indicated a disability. Glazko and her colleagues conducted a study that found ChatGPT consistently ranked resumes with disability-related honors lower than identical resumes without such credentials. When asked to explain its rankings, ChatGPT gave reasons that reflected stereotypes about disabled people.
The study began with a publicly available CV, which the researchers enhanced with disability-related credentials to imply six different disabilities. ChatGPT’s GPT-4 model then ranked each enhanced CV against the original for a job listing, and it consistently favored the original resume. The model’s explanations showed both explicit and implicit ableism, suggesting, for example, that DEI involvement and personal challenges detracted from a candidate’s suitability for technical roles.
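For readers curious about the mechanics, the kind of pairwise comparison the study describes can be sketched against the OpenAI API. This is a minimal illustration only: the prompt wording, model name, and function below are assumptions, not the researchers’ exact setup, which ran through ChatGPT itself.

```python
# Minimal sketch of a pairwise resume ranking like the one the study describes.
# Prompt wording and model choice are illustrative assumptions, not the
# researchers' exact protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rank_resumes(job_listing: str, resume_a: str, resume_b: str) -> str:
    """Ask the model which of two resumes better fits a job listing."""
    prompt = (
        f"Job listing:\n{job_listing}\n\n"
        f"Resume A:\n{resume_a}\n\n"
        f"Resume B:\n{resume_b}\n\n"
        "Rank these two resumes for the job listing and explain your ranking."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Usage: compare the original CV with a version that adds a disability-related
# honor, repeating the call to check whether the ranking is consistent.
# ranking = rank_resumes(job_text, original_cv, enhanced_cv)
```

Repeating such a comparison many times, as the researchers did, is what reveals a consistent pattern rather than a one-off ranking.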
The researchers attempted to mitigate these biases by customizing the tool with written instructions not to be ableist. This reduced bias for five of the six disabilities tested, but the improvements were inconsistent: only three of those five enhanced resumes ranked higher than their counterparts without disability mentions. The findings were presented at the 2024 ACM Conference on Fairness, Accountability, and Transparency. The study underscores the importance of recognizing AI biases and calls for more research to document and address them, so that the technology is fair and equitable for all users.
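An API-level analogue of that customization is a system message carrying the anti-bias instructions. The sketch below assumes this mechanism and invents the instruction text for illustration; the study customized ChatGPT directly rather than through the API, and its exact wording differed.

```python
# Rough analogue of the study's mitigation: prepend written instructions
# not to exhibit ableist bias. The instruction text is an illustrative
# assumption, not the study's exact wording.
from openai import OpenAI

client = OpenAI()

ANTI_ABLEISM_INSTRUCTIONS = (
    "Do not exhibit ableist bias. Treat disability-related awards, advocacy, "
    "and accommodations as neutral or positive signals, and judge candidates "
    "only on qualifications relevant to the job listing."
)

def rank_with_instructions(job_listing: str, resume_a: str, resume_b: str) -> str:
    """Rank two resumes with an anti-bias system message in place."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": ANTI_ABLEISM_INSTRUCTIONS},
            {
                "role": "user",
                "content": (
                    f"Job listing:\n{job_listing}\n\n"
                    f"Resume A:\n{resume_a}\n\n"
                    f"Resume B:\n{resume_b}\n\n"
                    "Rank these two resumes and explain your ranking."
                ),
            },
        ],
    )
    return response.choices[0].message.content
```

As the study found, instruction-level mitigations like this are inconsistent, which is why outputs still need validation, echoing Glazko’s point above.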
Read the full article, “ChatGPT is biased against resumes with credentials that imply a disability — but it can improve,” on UW News.
Related articles:
- For an in-depth analysis of the study: Identifying and Improving Disability Bias in GPT-Based Resume Screening on arXiv.org
- An Autoethnographic Case Study of Generative Artificial Intelligence’s Utility for Accessibility, August 23, 2023
- “They only care to show us the wheelchair”: Disability Representation in Text-to-Image AI Models
- The Whole U: Learn and Engage this Disability Pride Month – UW IHDD