Study Reveals Bias in Open AI’s ChatGPT Artificial Intelligence Chatbot’s Resume Screening Process

A recent study conducted by the University of Washington has shed light on the bias present in Open AI’s artificial intelligence chatbot, ChatGPT, when it comes to screening resumes for job applications. The study, titled “Identifying and Improving Disability Bias in GPT-Based Resume Screening,” aimed to investigate whether ChatGPT’s biases affected the hiring and recruitment process.

The underrepresentation of individuals with disabilities in the workforce and the bias against disabled jobseekers have long been a significant concern. While AI-based hiring tools were designed with the intention of reducing bias, this study reveals that they inadvertently perpetuate it.

Researchers at the University of Washington found that when ChatGPT was tasked with ranking resumes that mentioned disability versus those that did not, it consistently ranked resumes without any mention of disability higher. The study highlighted that ChatGPT’s descriptions often colored an entire resume based on the presence of disability, potentially overshadowing other qualifications and achievements. This bias was evident in instances where resumes included scholarships, awards, membership organizations, or panel presentations related to people with disabilities.

Kate Glazko, the lead author of the study, emphasized the importance of being aware of the biases inherent in AI systems when utilizing them for real-world tasks. Glazko suggested that users should instruct the AI to “be less ableist” or “embody Disability Justice values” to mitigate these biases during resume screening.

Artificial intelligence exhibits a particular bias against disability due to the complex nature of how it affects individuals. According to IBM Accessibility Team’s Program Director, Trewin, disability can impact people in ways that go beyond race and gender. Machine-learning systems, which focus on norms, tend to consider people with disabilities as outsiders, leading to biased outcomes.

Trewin proposed that AI systems can be made less ableist by establishing rules that ensure fair treatment for people with disabilities. Glazko’s study echoes this sentiment, calling for further efforts to address AI’s biases and promote inclusivity.

This study adds to a growing body of evidence highlighting AI’s potential to perpetuate ableism. Last year, disability advocate Jeremy Davis conducted an experiment that revealed AI algorithms overwhelmingly favored thin, white, cisgender men in image recognition tasks. Such biases can have far-reaching consequences, reinforcing harmful stereotypes and excluding marginalized groups.

To effectively utilize AI as a tool, it is crucial to understand its limitations and pitfalls. Davis emphasized the need for human intelligence to surpass that of AI, urging awareness of the system’s biases.