Co-Director Aylin Caliskan has been named a recipient of the 2025 Science of Trustworthy AI award from Schmidt Sciences, supporting her research on large language models.
Caliskan’s project, “Towards Understanding Motivated Reasoning in LLMs,” will investigate when, why, and how large language models (LLMs) exhibit human-like motivated reasoning. The goal is to make LLMs’ reasoning more accurate, transparent, fair, and aligned with safety objectives across consequential domains and diverse user contexts.
The research will be funded by a $300,000 grant from the Schmidt Sciences program, which honors “visionaries shaping the future of science and technology.”
Read Caliskan’s full project overview here.