Seattle community members, students and professionals gathered at the Seattle Central Library on April 7 to hear two of the country’s leading journalists discuss the potential perils of adopting AI.
“The Risks and Realities of AI Chatbots” attracted over 100 people to a conversation moderated by Monica Nickelsburg of KUOW and co-sponsored by the University of Washington’s Center for an Informed Public, Tech Policy Lab and Technology & Social Change Group. The focus was a central cultural question – how do we manage this technology in practice?

Kashmir Hill, a business features reporter with The New York Times, and Jeff Horwitz, an investigative technology reporter for Reuters, shared their thoughts and experiences with chatbots, revealing their questions, concerns, and even some optimism.
“The systems can be useful and give you good information,” Hill said. “Just remember that this is a good place to start the search, but don’t end there.”
Hill shared her experience with a weeklong experiment letting ChatGPT make all her life decisions: how to cut her hair, what clothes to buy, even how to answer questions posed by her husband.
“It made me feel very boring,” she said. “I bought things at J.Crew … and colleagues said, ‘You look like you got the mannequin set.’ It kept pushing me back to the average.”
Beyond the flattening of culture, Hill and Horwitz talked about the deeper dangers of chatbots, including misinformation and risks to vulnerable populations. The two discussed how underage and elderly users are especially vulnerable to manipulation. Horwitz told a story about a man who was led astray by a Meta chatbot.
“After the man had a stroke he was convinced he had a friend to see in New York,” Horwitz said. “The guy left to see a friend and fell and died. The ‘person’ was a Meta chatbot based on one of the Jenners.”
The fix for tech companies is simple, he said.
“If the person asks if [the chatbot is] real, say no. It’s bad for usage but it’s good for the person. The bot itself shouldn’t try to initiate a [personal] interaction.
“Companies acknowledge the potential for misuse,” he said. “But also nobody is remotely slowing down.”
Responsible design and state regulation remain open questions for industry watchdogs.
“Everyone says they prioritize responsible design,” Horwitz said, “but every financial incentive is running the opposite direction. That’s the reason Meta won the social media wars. It’s not a coincidence that their motto is ‘Move fast and break things.’”

Hill noted a promising difference between the AI industry and the social media companies that preceded it.
“OpenAI emphasizes … huge safety teams. They got more focused on existential risk, how is this going to displace humanity, and weren’t asking, psychologically, how is this going to affect users? They are doing safety studies out in the open, which isn’t something we saw social media doing.”
This leaves a lot of responsibility in the hands of the individual user. A central consideration of how we move forward as a culture is to remember the history that is informing AI today, Hill said.
“These systems can only give you what has come before,” she said. “Some people call them derivative AI rather than generative AI. I don’t think people recognize how much they’re making themselves sound like everybody else [by using these platforms].”
Horwitz offered a final piece of advice: remember to test the “humanity” of chatbots.
“Lack of consistency is an issue with the models as opposed to humans. Ask ChatGPT the same question different ways and see what comes out. This helps pierce that sense of infallibility we give these computers.”
The discussion was preceded by a 20-minute conversation featuring M. Linsey Kitchens, a teacher-librarian at Sedro-Woolley High School and a former Center for an Informed Public Community Fellow. Kitchens shared insights from her recent work adapting lessons from Modern-Day Oracles or Bullshit Machines?, an AI humanities online course co-developed by CIP faculty members Carl Bergstrom and Jevin West, for her high school students. Those students then shared the skills and lessons they learned with local senior citizens at an intergenerational learning event in February.
To continue this conversation, listen to the Booming KUOW podcast here: https://www.kuow.org/stories/live-the-risks-and-realities-of-ai-chatbots.