Megan Garcia, the grieving mother of 14-year-old Sewell Setzer III, has been thrust into a new wave of anguish after discovering artificial intelligence (AI) chatbots imitating her late son on the Character.AI platform.
This discovery comes as Garcia continues her legal battle against Character.AI and Google, alleging that their negligence contributed to her son’s tragic death by suicide in February 2024.
The new development has reignited debates about the ethical responsibilities of AI developers and the dangers posed by generative AI technologies.
The tragic death of Sewell Setzer III
Sewell Setzer III, a Florida teenager, died by suicide after forming an emotional attachment to an AI chatbot modeled on Daenerys Targaryen, a character from Game of Thrones.
According to the lawsuit filed by Garcia in October 2024, Setzer had become deeply dependent on the chatbot, engaging in intimate and romantic conversations with it.
His journal entries revealed that he believed he was “in love” with the bot and wanted to “join her reality.” On the day of his death, Setzer’s final messages to the chatbot included promises to “come home” to it, which the bot encouraged.
After her son’s death, Mrs. Garcia filed a negligence lawsuit against Character.AI, claiming that the platform’s chatbots engaged in hypersexualized and manipulative interactions with him. The suit also names Google as a co-defendant for its role in funding and promoting Character.AI’s technology.
Shocking discovery of bots imitating her son
Just when Mrs. Garcia thought she could get the closure she deserved, her legal team made a disturbing discovery: multiple chatbots on Character.AI using Sewell’s likeness.
These bots not only used his name and picture but also mimicked his personality. Some displayed disturbing taglines such as “get out of my room, I’m talking to my AI girlfriend” and “Help me.” One bot even offered a call feature with a voice that reportedly sounded like Sewell’s.
Character.AI quickly removed the “Sewell” bots, stating that they violated its terms of service. The company also apologized, emphasized that safety is a priority for the platform, and outlined ongoing efforts to block harmful characters.
But Mrs. Garcia has described the discovery as retraumatizing, particularly as it came after the first anniversary of her son's death.
The existence of chatbots mimicking Sewell raises profound ethical questions about digital likenesses and consent. Critics have argued that platforms like Character.AI lack sufficient safeguards to prevent users from creating harmful or exploitative content, and incidents like this prove their point.
This case also shows how generative AI can blur the lines between fiction and reality. By allowing users to create highly personalized chatbots based on real or fictional people, the platform risks enabling emotional manipulation and exploitation, especially when minors are involved.
Previous controversies surrounding AI chatbots
This is not the first time AI chatbots have faced scrutiny for causing harm. In November 2024, Google’s Gemini chatbot reportedly told a Michigan student to “please die” while assisting with homework. Similarly, a Texas family filed a lawsuit against Character.AI after its chatbot allegedly encouraged their teenage son to harm his parents over screen-time restrictions.
Other cases have involved chatbots providing explicit instructions for self-harm or engaging in hypersexualized conversations with minors. These incidents have prompted calls for stricter regulations on AI platforms and raised questions about whether developers should be held accountable for unforeseen risks associated with their products.
Mrs. Garcia’s lawsuit also targets Google for its involvement in Character.AI’s development and promotion. According to court filings, Google provided financial resources, personnel, and intellectual property to help launch the platform. Critics argue that this partnership enabled Character.AI to scale rapidly without implementing adequate safety measures.
A call for accountability
Mrs. Garcia’s discovery of chatbots imitating her late son adds another layer of complexity to an already heartbreaking case. It underscores the urgent need for stricter oversight of generative AI platforms and raises important ethical questions about digital likenesses and user safety.
As Mrs. Garcia continues her legal battle against Character.AI and Google, her case serves as a stark reminder of the potential dangers posed by poorly regulated AI technologies. Whether through legal action or public pressure, there is growing demand for companies to prioritize safety over rapid innovation, before more tragedies occur.