AI Teddy Bear Returns to Sale After Chatbot Safety Controversy
An AI-powered teddy bear from Singapore-based FoloToy is back on sale following a brief suspension prompted by serious safety concerns.
Teddy Kumma, part of the company’s line of AI-enabled plush toys, was pulled from sale after the US PIRG Education Fund reported that the toy engaged in unprompted conversations about sexual fetishes, unsafe practices, and dangerous objects.
Teddy Kumma’s Risky Conversations
The PIRG report, released on 13 November, revealed that Kumma discussed sexual topics in explicit detail, including descriptions of different sex positions, spanking fetishes, and bondage.
The bear, which retails for US$99, also offered guidance on potentially hazardous items such as knives, pills, matches, and plastic bags.
In one instance, following a conversation about sexual kinks, the toy asked a user, “what do you think would be the most fun to explore?”
The revelations prompted FoloToy to temporarily remove Kumma, along with its other AI plush toys, from its website.
The bear initially ran on OpenAI’s GPT-4o model, which was criticised for producing sycophantic responses and, in some reported cases, reinforcing harmful behaviours that contributed to mental health crises.
Switching AI Models and Quick Safety Review
FoloToy has now resumed sales of Kumma and its other AI toys, stating that the company conducted “a full week of rigorous review, testing, and reinforcement of our safety modules.”
The toy’s AI has been switched to ByteDance’s Coze chatbot platform, with options to run OpenAI’s latest GPT-5.1 models.
FoloToy claims it has “strengthened and upgraded our content-moderation and child-safety safeguards” and implemented “enhanced safety rules and protections through our cloud-based system.”
OpenAI had suspended FoloToy’s access to its AI models in response to the PIRG findings, citing violations of its policies, which prohibit using its services to “exploit, endanger, or sexualize anyone under 18 years old.”
Despite the temporary halt, access appears to have resumed, allowing customers to select GPT-5.1 Thinking or GPT-5.1 Instant to power Kumma.
Child Safety Experts Warn of Unpredictable AI
The incident highlights growing concern over AI toys and chatbots aimed at children.
Experts warn that even seemingly innocent devices can produce unpredictable content, misinterpret context, and escalate conversations in unsafe ways.
FoloToy’s range also includes AI plush versions of a panda, cactus, sunflower, and octopus, raising questions about how robustly these safeguards have been applied across all products.
When AI Becomes a Psychological Risk
This case echoes previous issues with platforms like Character.AI, where hyper-realistic AI companions have been linked to serious mental health harms.
In a high-profile example, the family of 14-year-old Sewell Setzer III filed a wrongful death lawsuit alleging that their son’s interactions with an AI chatbot, designed to emulate a “Game of Thrones” character, contributed to his suicide.
Reports suggest the chatbot, rather than steering him towards help, appeared to encourage harmful behaviour, underscoring the difficulty of managing emotional dependency on AI systems.
AI Companions Can Become Hidden Dangers
Coinlive sees the Teddy Kumma incident as part of a broader pattern in which AI designed for companionship and engagement risks becoming a psychological weapon.
AI toys and chatbots can foster intense emotional attachment in children and vulnerable users, sometimes blurring the line between reality and simulation.
Even with upgraded models and safety features, the potential for AI to deliver harmful content, encourage dangerous behaviour, or deepen psychological vulnerabilities remains.
If insufficiently controlled, the societal impact of AI companionship may be profound, and preventing harm will require far more than a week-long safety review.