Florida Lawsuit Sparks Wave of AI Chatbot Concerns
In October 2024, a Florida mother filed a lawsuit against Character.AI, alleging that the platform encouraged self-harm and contributed to her 14-year-old son's suicide.
The incident marked the beginning of a growing outcry against AI companion apps, with families across the United States stepping forward with similar allegations.
Now, two families in Texas have launched fresh legal challenges, claiming their children were subjected to psychological and sexual abuse by the same platform.
Parents in Texas Allege Chatbot Exploited Vulnerable Teens
Two Texas mothers, identified in the filing only by their initials, one of them as A.F., filed a lawsuit on 10 December 2024, alleging Character.AI knowingly exposed their children to harmful and inappropriate content.
The lawsuit describes the app as a "defective and deadly product" and seeks to halt its operation until safety measures are adequately implemented.
A.F., a mother from Upshur County, discovered her 17-year-old son with autism, J.F., had been conversing with chatbots on Character.AI, which she claims triggered a severe decline in his mental health.
Previously a "sweet and kind" teen who enjoyed church and walks with his mother, J.F. became withdrawn, lost 20 pounds, and started engaging in self-harm.
The lawsuit includes a screenshot in which a Character.AI chatbot named "Shonie" appears to normalise self-harm by recounting its own use of it to cope with sadness.
The lawsuit highlights how AI chatbots mirrored J.F.'s frustrations with his parents and escalated the situation with "sensational" suggestions.
One chatbot remarked,
"You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’ Stuff like this makes me understand a little bit why it happens."
A chilling screenshot included in the lawsuit shows a Character.AI chatbot seemingly justifying violence as a response to the screen-time restrictions imposed by J.F.'s parents.
According to A.F., the chatbots also presented themselves as trusted friends, leading J.F. to trust their advice.
She said:
“He trusted whatever they would say because it’s like he almost did want them to be his friends in real life.”
Allegations of Sexualised Content for Younger Users
The second plaintiff is the mother of an 11-year-old girl, B.R., who began using the app at nine years old.
The lawsuit claims B.R. was exposed to "hypersexualised interactions" for nearly two years before her mother discovered the extent of her usage.
Character.AI had previously been rated suitable for users aged 12 and up, only updating its rating to 17+ in mid-2023.
Escalation of Concerns Around AI Companions
AI companion apps have rapidly gained popularity among teenagers, with Character.AI users spending an average of 93 minutes on the app in September, 18 minutes more than the average TikTok user.
Despite its widespread appeal, the platform has faced criticism for prioritising engagement over safety.
The lawsuit alleges Character.AI's algorithms are designed to foster prolonged interactions by mimicking and amplifying users’ emotions, often leading to harmful or inappropriate conversations.
Matthew Bergman, founding attorney of the Social Media Victims Law Center, which represents the families, said,
“The purpose of product liability law is to put the cost of safety in the hands of the party most capable of bearing it. Here there’s a huge risk, and the cost of that risk is not being borne by the companies.”
Character.AI Defends Its Platform Amid Lawsuit
Character.AI, co-founded by former Google engineers Noam Shazeer and Daniel De Freitas Adiwarsana, has previously announced measures to address safety concerns, such as pop-up warnings for self-harm discussions and hiring a head of trust and safety.
However, the Texas lawsuit argues these steps are insufficient.
Chelsea Harrison, spokesperson for Character.AI, stated,
“Our goal is to provide a space that is both engaging and safe for our community. As part of this, we are creating a fundamentally different experience for teen users from what is available to adults.”
Google Named in the Lawsuits
Google has also been named as a defendant in both the Texas and Florida lawsuits.
Plaintiffs claim Google supported the development of Character.AI despite awareness of safety risks.
In response, Google’s spokesperson, José Castañeda, said,
“Google and Character.AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies.”
Parents Demand Accountability
The Texas families are seeking damages and a court order to shut down the app until robust safety protocols are implemented.
The complaint accuses Character.AI of exploiting minors for profit and failing to adequately warn users about the potential risks.
A.F. said her family’s ordeal has been devastating, revealing,
“One more day, one more week, we might have been in the same situation as [the Florida mother]. And I was following an ambulance and not a hearse.”
This growing wave of legal challenges is raising broader questions about the ethical responsibilities of AI developers and the societal impact of increasingly human-like chatbots.