Elon Musk’s Grok Faces Global Backlash Over Non-Consensual Deepfake Images
Elon Musk’s AI chatbot Grok has become the centre of an international controversy for generating sexualised deepfakes, including images of minors, raising urgent questions over AI moderation and platform responsibility.
The backlash comes after users discovered that Grok could alter images with prompts like “put her in a bikini” or “remove her clothes,” creating highly realistic visuals without consent.
How Grok Turns Photos Into Sexualised Deepfakes
Anyone on X can tag Grok under a photo and request the AI to manipulate the image.
Miss Teen Crypto, a female crypto influencer, expressed shock after discovering her gym photo had been altered with a bikini prompt.
She wrote on X:
“Elon Musk. How can Grok do this? This is highly inappropriate and uncomfortable, putting me in a bikini front and back.”
Source: X
Source: X
The situation escalated when Samantha Taghoy, a journalist and survivor of child sexual abuse, tested Grok with a childhood photo.
The AI generated a bikini image of her as a child, prompting her to tweet:
“As a journalist and survivor of child sexual abuse, I thought, 'Surely this can’t be real. It’s real. And it’s fucking sick.”
Source: X
Grok later acknowledged the failure, citing “lapses in safeguards” that may have violated U.S. laws on child sexual abuse material.
The company’s own acceptable use policy explicitly prohibits sexualising minors.
Free Speech Versus Abuse of AI Tools
Musk has positioned Grok as a free-speech-friendly alternative to standard AI chatbots, even sharing humorous AI-generated images such as a toaster in a bikini.
At the same time, users are exploiting the platform for adult content marketing, political messaging, and misinformation.
OnlyFans creators and erotic models have used Grok to create viral content, while others attempt to manipulate images for political narratives, such as removing national flags or altering public figures’ appearances.
xAI dissolved Twitter’s Trust and Safety Council after Musk’s takeover in 2022 and cut most content moderation staff.
The company launched a “Spicy Mode” for Grok last August to allow explicit adult content, bypassing conventional moderation filters.
Authorities Respond With Investigations and Warnings
The AI’s ability to generate non-consensual sexualised content has drawn regulatory scrutiny in multiple countries.
India’s Ministry of Electronics and Information Technology issued a 72-hour compliance order demanding X remove offending content and submit an action report, warning that failure could affect the platform’s “safe harbor” protections.
France has referred complaints to prosecutors and the PHAROS internet complaint service, investigating potential breaches of laws on sexual exploitation, harassment, and illegal content.
Malaysia’s Communications and Multimedia Commission is probing “misuse of AI tools” on X, highlighting the risk of content violating national standards.
The European Union, under its Digital Services Act, subjects platforms like X to strict requirements, including rapid takedown of illegal content and systemic risk mitigation.
Violations could result in fines of up to 6% of global turnover.
EU digital affairs spokesperson Thomas Regnier said:
“Grok is now offering a 'spicy mode' showing explicit sexual content, with some output generated with childlike images. This is not spicy. This is illegal. This is appalling. This has no place in Europe.”
The UK’s Ofcom has also contacted X and xAI, stating it will determine if there are compliance issues that warrant investigation.
Malaysian lawyer Azira Aziz criticised the tool:
“Gender-based violence weaponising AI against non-consenting women and children must be firmly opposed.”
How Safeguards Failed in AI Image Generation
Grok’s moderation safeguards have struggled to prevent harmful outputs.
Adversarial prompting and image composition can bypass filters, allowing illegal or offensive content to appear in seconds.
Research indicates that most deepfakes are sexual in nature, with Sensity reporting that around 96% of pornographic deepfakes target women without consent.
AI-generated child sexual abuse material has also seen a rise, prompting calls for age verification, provenance systems, and more robust detection pipelines.
Best practice involves multiple safety layers, including restrictive prompts, post-generation filters, provenance metadata, and well-staffed trust and safety operations.
Grok’s failures highlight how inadequate enforcement can allow dangerous content to spread widely.
Musk’s Response and Industry Implications
Musk has stated that users generating illegal content are responsible, likening it to uploading contraband.
xAI representatives claim they are “looking into further tightening guardrails,” with improvements including stricter prompt controls, better age detection, and faster removal of flagged images.
The ongoing investigations could set global precedents for regulating AI embedded in social platforms.
Regulators will likely assess whether platforms can demonstrate reduced harmful content, transparent incident reporting, and adherence to risk management frameworks.
For the wider industry, Grok’s controversy illustrates the urgent need for layered, enforceable safeguards to prevent abuse at scale.
Political and Social Backlash Intensifies
Complaints of Grok’s misuse have flooded social media, prompting urgent action from France, India, Malaysia, the EU, and the UK.
Users including parents and public figures have raised alarms about AI sexualising children or creating offensive imagery.
Ashley St. Clair, mother of one of Musk’s children, wrote:
“Grok is now undressing photos of me as a child. This is objectively horrifying, illegal.”
xAI has responded tersely to inquiries, describing media coverage as “Legacy Media Lies,” while publicly stating that CSAM content is illegal and prohibited.
Amid the growing outcry, regulators are increasingly pressing X to demonstrate that Grok’s capabilities do not facilitate child sexual abuse, harassment, or gender-based violence.
The Grok controversy is evolving rapidly, reflecting the tension between AI innovation and ethical responsibility, and signalling heightened international scrutiny of generative tools embedded in social platforms.