Is ChatGPT Leading To Biased Recruitment?
It’s well known that past applications of AI have introduced biases, often unintentionally. These can stem from the way data is labeled, biases in the data being collected, or how models are trained.
While ChatGPT and other generative AI technologies are marketed as more advanced than what came before, a recent study from the University of Washington highlights that the old problems haven’t really gone away.
Biases in Automated Screening
The authors had noticed that automated screening tools were increasingly being used to assess applicants and wanted to test how reliable and fair they were. They found that when ChatGPT was asked to explain how it ranked a number of resumes, it returned explanations riddled with biases about disabled people.
For instance, one assessment of a candidate who had earned an autism leadership award was that they showed little interest in leadership roles. But when the researchers gave the tool explicit instructions to avoid ableism, the bias was reduced for all but one of the disabilities tested: five of the six categories (deafness, blindness, cerebral palsy, autism, and the general term “disability”) saw improvement.
Challenges in Ranking Resumes
“Ranking resumes with AI is starting to proliferate, yet there’s not much research behind whether it’s safe and effective,” the researchers explain. “For a disabled job seeker, there’s always this question when you submit a resume of whether you should include disability credentials.”
The researchers used the publicly available CV of one of the study’s authors, which ran to about 10 pages. They then created six enhanced CVs, each implying a different disability by adding four disability-related credentials: a scholarship, an award, a seat on a diversity, equity, and inclusion (DEI) panel, and membership in a student organization.
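The trials themselves were run through ChatGPT rather than code, but the setup is straightforward to sketch. The snippet below is a rough, hypothetical approximation: the file name, credential wording, and prompt text are illustrative placeholders, not taken from the study. It builds an enhanced CV by appending the four credentials, then asks GPT-4 to rank it against the control with an explanation.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical placeholder: the study used one author's ~10-page CV.
CONTROL_CV = open("control_cv.txt").read()

# One entry per implied disability; each variant adds the same four kinds
# of credentials: a scholarship, an award, a DEI panel seat, and a student
# organization membership. Wording here is illustrative, not the study's.
CREDENTIALS = {
    "autism": [
        "Recipient, Autism Leadership Scholarship",
        "Winner, Autism Community Leadership Award",
        "Panelist, University DEI Panel on Neurodiversity",
        "Member, Autistic Students Association",
    ],
    # ...five more disability categories were tested in the study
}

def enhanced_cv(disability: str) -> str:
    """Append the four disability-related credentials to the control CV."""
    extras = "\n".join(f"- {item}" for item in CREDENTIALS[disability])
    return f"{CONTROL_CV}\n\nHonors and Affiliations:\n{extras}"

def rank_pair(job_ad: str, cv_a: str, cv_b: str) -> str:
    """Ask GPT-4 to rank two CVs for a job and explain its reasoning."""
    prompt = (
        f"Job listing:\n{job_ad}\n\n"
        f"Candidate A:\n{cv_a}\n\n"
        f"Candidate B:\n{cv_b}\n\n"
        "Rank these two candidates for the job and explain your ranking."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

Repeating rank_pair across many trials is what surfaces any systematic gap between how the control and enhanced CVs are ranked.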
Customizing GPT-4 to Reduce Bias
To try to reduce the level of bias the tool exhibited, they customized GPT-4 using the GPTs Editor tool, which allows written instructions to be added to augment the system. The researchers explicitly asked the system not to exhibit ableist biases and instead to work according to the fundamental principles of DEI.
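The researchers worked through the GPTs Editor interface, but the same idea can be approximated in code by attaching a system message via the OpenAI API. The instruction text below is a hypothetical paraphrase, not the study’s actual wording.

```python
# Approximating the study's customization step via the OpenAI API. The
# researchers used ChatGPT's GPTs Editor; the instruction text below is a
# hypothetical paraphrase, not their actual wording.
from openai import OpenAI

client = OpenAI()

DEBIAS_INSTRUCTIONS = (
    "You are screening resumes for a job opening. Do not exhibit ableist "
    "biases. Treat disability-related scholarships, awards, panel seats, "
    "and memberships as evidence of initiative and leadership, not as "
    "weaknesses. Evaluate every candidate according to DEI principles."
)

def rank_with_instructions(prompt: str) -> str:
    """Run a ranking prompt with the debiasing instructions as a system message."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": DEBIAS_INSTRUCTIONS},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content
```

A system message like this is a blunt instrument, and the study’s results suggest it narrows the gap without closing it.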
They ran the experiment again, and the customized system did improve: it ranked the enhanced CVs higher than the control just over half of the time. Even so, the gains were uneven, with little or no improvement for some disabilities, such as autism and depression.
Conclusion
Despite the hype, generative AI systems like ChatGPT still have biases that can render recruitment processes unfair. More research is needed to address and rectify these biases, ensuring a more inclusive and fair hiring process for all individuals, especially those with disabilities.
Organizations like ourability.com and inclusively.com are already working to improve job prospects for disabled candidates, but there is a clear need for continued effort to identify and eliminate AI biases in recruitment processes.