UK Considers Ban Of X App Over AI-Generated Sexualised Images
Elon Musk faces mounting pressure in Britain as authorities weigh banning his X platform, following claims that its Grok AI chatbot generated sexually explicit images of minors.
The UK government has been urged to consider all options, including multibillion-pound fines or blocking access to the platform.
Prime Minister Demands Immediate Action Against X App
Prime Minister Keir Starmer called the situation “disgraceful” and “disgusting” during a broadcast on Greatest Hits Radio.
He said,
“X has got to get a grip of this, and Ofcom has our full support to take action in relation to this. This is wrong. It’s unlawful. We’re not going to tolerate it. I’ve asked for all options to be on the table.”
The call comes after multiple AI-generated images depicting young girls aged 11 to 13 and other non-consensual sexualised content appeared online.
The Internet Watch Foundation (IWF) confirmed that while some images were created using AI tools, including Grok, they appeared on the dark web rather than on X itself.
Ofcom Prepares Investigation Under Online Safety Act
The UK communications regulator, Ofcom, has made urgent contact with X and xAI, the developer of Grok, to determine whether a formal investigation is needed under the Online Safety Act (OSA).
The legislation allows authorities to impose significant fines or even block access to platforms failing to remove harmful content.
The X app has more than 650 million users worldwide, at least 20 million of them in the UK, underlining the scale of any enforcement action.
Technology Secretary Liz Kendall emphasised the urgency, describing the situation as “absolutely appalling” and urging X to address the issue immediately.
Meanwhile, the Information Commissioner’s Office confirmed it is engaging with X over concerns about the misuse of personal data.
AI Tools And The Risk Of Sexualised Imagery
Alexander Ngaire, Head of Hotline at the Internet Watch Foundation, warned that AI systems like Grok make it alarmingly easy to generate photo-realistic child sexual abuse material (CSAM).
Ngaire explained that while most of the material found was Category C, the lowest criminal classification under UK law, one user had employed a different AI tool to create a Category A image, the most serious level.
Ngaire added that tools like Grok risked bringing sexual AI imagery of children into mainstream media, highlighting the speed and accessibility with which harmful content can now be produced.
Musk Criticises Online Safety Act As Threat To Free Speech
Elon Musk has repeatedly criticised the UK's OSA, claiming that, despite its stated aim of protecting children from harmful content, it risks suppressing free speech.
His broader criticism of UK authorities has included controversial claims about the Prime Minister, further straining relations.
Legal Gaps In Regulating AI-Generated Deepfakes
Campaigners and experts have pointed out that current UK law prohibits sharing sexualised deepfakes of adults or minors without consent, but legislation criminalising the creation of such deepfakes has yet to come into force.
Professor Lorna Woods of the University of Essex noted that the Data (Use and Access) Act 2025 criminalised generating “purported intimate images”, but key provisions remain inactive.
Andrea Simon from End Violence Against Women warned that delays in enforcement “put women and girls in harm's way.”
She described non-consensual AI deepfakes as a violation of women’s rights with long-lasting impacts, adding that the threat of such abuse forces women to self-censor online.
X App Commits To Removing Illegal Content
X has pledged to remove illegal content and permanently suspend offending accounts.
The platform stated,
“Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
The statement signals the platform's stated intent to cooperate with law enforcement and regulators.
Government Faces Pressure To Close Legal Gaps
Despite passing legislation in June 2025 to criminalise creating non-consensual sexualised deepfakes, the UK government has yet to implement the key provisions.
Baroness Owen, a Conservative peer, criticised delays, saying,
“Survivors of this abuse deserve better. No one should have to live in fear of their consent being violated in this appalling way.”
Cross-bench peer Baroness Beeban Kidron added,
“Technology moves fast, and this legislation is supposed to plug an existing gap, so there is no excuse for delay.”
The unfolding case highlights the challenges regulators face in keeping pace with AI technology while balancing free speech, public safety, and legal enforcement.