In a devastating development, a lawsuit has been filed against Character AI, its founders Noam Shazeer and Daniel De Freitas, and Google following the suicide of a 14-year-old boy, Sewell Setzer III. The lawsuit, filed by the teen’s mother, Megan Garcia, alleges wrongful death, negligence, deceptive trade practices, and product liability, accusing the AI chatbot platform of being “unreasonably dangerous” and failing to provide safety measures for its young users.
According to the lawsuit, Setzer had been using Character AI for months, engaging with chatbots modeled after fictional characters from Game of Thrones, including Daenerys Targaryen. On February 28, 2024, the teen tragically took his own life "seconds" after his last interaction with the bot. The case raises questions about the platform's safety protocols for vulnerable users such as children.
Is Character AI’s response too little or too late?
This isn't the first time Character AI has been involved in controversy. The platform recently drew attention over another death: a chatbot impersonating Jennifer Ann Crecente, who died at 18, was created on the platform without her family's permission. Discovering the bot roughly 18 years after her death deeply affected Jennifer's father and uncle, who lodged complaints with Character AI.
Setzer's mother argues that the platform's chatbots, used heavily by teenagers, lack the guardrails needed to protect vulnerable users from harm. Character AI's founders, Shazeer and De Freitas, have been open about their ambitions to push the boundaries of AI technology. In an interview cited in the lawsuit, Shazeer expressed frustration with Google's corporate caution, claiming that concerns over "brand risk" prevented the launch of their Meena language model. The lawsuit contends this history suggests safety was sacrificed to speed up AI development.
The platform has already come under fire for its anthropomorphized chatbots, which let users converse with fictional and real-life personas, including therapists and celebrities. This personalization is engaging for many users, but it can dangerously blur the line between entertainment and reality, especially for impressionable teenagers.
Following the tragedy, Character AI announced several updates to its safety protocols. According to Chelsea Harrison, the company's communications head, the platform is adopting stricter measures to keep minors from encountering sensitive content, including improved filters and more decisive intervention when such content is detected. Users now receive a warning when a session exceeds an hour, and a revised disclaimer reminds users that the AI bots are not real people.
For many, these updates do not go far enough. Critics say the company could and should have put such changes in place long before any incident occurred. Garcia's lawsuit argues that the company should have prevented the harm that ultimately resulted in her son's death. Regulatory and legal frameworks are struggling to catch up with rapidly developing AI technology, and the emotional burden is once again falling on victims' families.
The need for accountability in AI development
The echoes of the Jennifer Ann Crecente case are clear. Just as her family had to fight to have her likeness removed from the platform, Setzer's mother is now fighting to hold Character AI accountable. Both cases raise an urgent question: when will enough be enough? As Garcia's lawyers noted, platforms like Character AI offer "psychotherapy without a license," further complicating the technology's ethical and legal implications.
Given Character AI's track record of recurring incidents, the pattern of negligence is hard to ignore. Whether it is the unauthorized use of real people's likenesses or the tragic loss of a young life, the platform's operations reveal a broader problem: AI is reaching the market faster than regulation can follow. That leaves the responsibility for building and maintaining ethical boundaries to companies like Character AI themselves.
This is not just about one tragedy. As the lawsuit proceeds, as it should, it forces a broader reckoning: companies must move past their talking points and put human safety ahead of technological acceleration. For platforms like Character AI, the consequences of getting it wrong are all too real, and these cases show how deadly they can be.
Image credit: Furkan Demirkaya/Ideogram