The media has spun up stories of AI ending humanity, but that is nowhere close to our current reality, Faculty Associate Emily Bender says.
In a recent article in National Geographic, Bender argues that current AI tools share little with the thinking, feeling AIs of science fiction.
“What companies mean when they say AI is ‘venture capitalists, please give us money,’ ” Bender said. “It does not refer to a coherent set of technologies.” She notes that tech companies lean into our perceptions of what it means to be intelligent in order to make their products seem more human-like.
While some discourse focuses on the potential harm AI poses to humanity, Bender is more concerned with the technology’s immediate risks, such as privacy violations, the environmental impact of data centers, and chatbots’ effects on mental health.
Algorithms “combust[ing] into consciousnesses” and deciding to kill us all “is not the problem I’m worried about,” she says.