IBM featured TPL research on AI bias in hiring in a recent article on its company news feed.
The article “How AI can help the recruitment process—without hurting it” highlights the need for critical thinking when using AI tools in hiring. As recruitment increasingly relies on AI, which the TPL research shows can be discriminatory, governing frameworks and legislation are emerging.
The paper, “Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval,” was led by TPL Co-Director Aylin Caliskan and UW Information School Ph.D. student Kyra Wilson.
“The models we used were not fine-tuned on any domain-specific datasets, so we observed that overall societal biases favoring white and male people also started occurring in positions which are not typically associated with these groups,” Wilson said of her research. “Using these models at scale then could have the potential to change societal patterns of employment in negative ways.”
Looking forward, Wilson recommends that researchers study factors beyond name and gender to determine how they signal identities.
“Learning more about how these factors can signal intersecting identities and whether that plays a role in AI evaluation is an important next step for researchers and model developers,” Wilson said.