GeekWire recently highlighted a study by Co-Director Aylin Caliskan that found significant bias in AI-powered resume screening technology.
“Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval” is discussed as part of GeekWire’s “Bot or Not?” series exploring the relationship between humans and machines.
Caliskan conducted the research with Information School Ph.D. student Kyra Wilson, testing three open-source large language models (LLMs). The results demonstrated bias by gender and by race, as well as intersectional bias when gender and race are combined.
Read the full GeekWire article here.