February 27, 2026

If chatbots are therapists, what are the obligations? Co-Director asks

A recent New York Times article considers questions surrounding uses of AI that endanger public safety, including instances where individuals built bombs or planned large-scale attacks.

“When Chatbots Are Used to Plan Violence, Is There a Duty to Warn?” raises issues of privacy versus safety, including questions about when suspicious activity or engagement with AI tools should be reported to law enforcement. In the article, the founder of OpenAI notes that personal conversations with therapists are protected by confidentiality privileges, and that AI tools may be covered in the same way.

Not necessarily in cases of imminent harm, says Co-Director Ryan Calo.

“If you are a therapist and you know someone will get hurt, you have an obligation to warn them,” Calo said in the article. “One wonders whether that would be appropriate here to the extent that someone is substituting chat for a therapist.”

Read the full article: “When Chatbots Are Used to Plan Violence, Is There a Duty to Warn?”