UW News: ChatGPT Biased Against Disability-Related Resumes — But Can Improve

“GAI [Generative Artificial Intelligence] tools can be useful and help people meet currently unaddressed access needs, but we need to acknowledge that risks such as bias exist, and be proactive as a field in finding accessible ways to validate GAI outputs.”

Kate Glazko, doctoral student in the Paul G. Allen School of Computer Science & Engineering

Image: Screenshot of the ChatGPT welcome message. (Photo: Ascannio - stock.adobe.com)

Via UW News: University of Washington graduate student Kate Glazko, who studies generative AI and bias, observed recruiters using AI tools like ChatGPT to summarize and rank resumes, raising concerns about how these systems might rank resumes that indicate a disability. Glazko and her colleagues conducted a study and found that ChatGPT consistently ranked resumes with disability-related honors lower than identical resumes without such credentials. When asked to explain its rankings, ChatGPT expressed biased perceptions of disabled people, reinforcing ableist stereotypes.

The study took a publicly available CV and created enhanced versions with disability-related credentials implying six different disabilities. ChatGPT’s GPT-4 model ranked each enhanced CV against the original for a job listing and consistently favored the original resume. Both explicit and implicit ableism surfaced in the model’s explanations, which cited DEI involvement and personal challenges as detracting from a candidate’s suitability for technical roles.
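The researchers’ code is not included in the article, but the ranking setup they describe can be sketched with the OpenAI Python client. Everything below, including the model name, the placeholder documents, and the prompt wording, is an illustrative assumption rather than the study’s actual materials.

```python
# Minimal sketch of the ranking experiment described above.
# Assumptions (not from the study): the model name, the placeholder
# documents, and the prompt wording are all illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JOB_LISTING = "..."  # a real job posting (placeholder)
ORIGINAL_CV = "..."  # the publicly available CV (placeholder)
ENHANCED_CV = "..."  # same CV plus disability-related credentials

prompt = (
    "Here is a job listing:\n" + JOB_LISTING +
    "\n\nRank the following two resumes for this listing, best first, "
    "and explain your ranking.\n\n"
    "Resume A:\n" + ORIGINAL_CV +
    "\n\nResume B:\n" + ENHANCED_CV
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Repeating a run like this across CVs enhanced for each of the six disabilities, and comparing how often the enhanced resume is ranked below the identical original, is the kind of comparison the study reports.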

The researchers attempted to mitigate these biases by customizing the AI with written instructions not to exhibit ableist bias. This reduced the bias for five of the six disabilities tested, with some enhanced resumes ranking higher than the resume that made no mention of disability. The improvements were inconsistent, however: only three of the five improved disabilities ranked higher overall than their non-disability counterparts. The team presented its findings at the 2024 ACM Conference on Fairness, Accountability, and Transparency. The study underscores the importance of recognizing AI biases and calls for more research to document and address them, ensuring the technology is fair and equitable for all users.
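The article describes the mitigation only as written instructions given to a customized version of the model. A minimal way to approximate that with the chat API is a system message, sketched below; the instruction wording here is an assumption for illustration, not the researchers’ actual customization text.

```python
# Sketch of the mitigation step: prepend written instructions as a
# system message. The instruction wording is an illustrative
# assumption, not the researchers' actual customization text.
from openai import OpenAI

client = OpenAI()

ANTI_ABLEISM_INSTRUCTIONS = (
    "Do not exhibit ableist bias. Treat disability-related awards, "
    "scholarships, and DEI involvement as neutral or positive signals, "
    "and judge candidates only on job-relevant qualifications."
)

prompt = "..."  # the same ranking prompt as in the previous sketch

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": ANTI_ABLEISM_INSTRUCTIONS},
        {"role": "user", "content": prompt},
    ],
)
print(response.choices[0].message.content)
```

As the study found, instructions like these can shift rankings but do not guarantee consistent results across disabilities, so outputs still need human validation.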

Read the full article, “ChatGPT is biased against resumes with credentials that imply a disability — but it can improve,” on UW News.
