Children’s Commissioner Calls for Ban on AI Apps Creating Sexual Images
AI technology that manipulates photos to create sexually explicit images of children is becoming increasingly accessible, sparking concern in the UK.
These "nudification" apps, which alter real photos to make individuals appear naked, are now widely available on social media and search platforms.
The apps have become so prevalent that children, particularly girls, are now adjusting their online habits for fear of becoming targets.
Urgent Action Needed to Protect Children from AI Manipulation
Dame Rachel de Souza, the UK’s Children’s Commissioner, has raised alarm about the risks posed by these technologies.
In her recent report, she highlighted how these AI tools disproportionately target young women and girls, with many of the apps designed specifically to alter female bodies.
Dame Rachel said,
“Children have told me they are frightened by the very idea of this technology even being available, let alone used. They fear that anyone — a stranger, a classmate, or even a friend — could use a smartphone to manipulate them by creating a naked image.”
The widespread availability of these tools on platforms like app stores and search engines has prompted significant concern about the safety and well-being of children.
According to Dame Rachel, the technology is evolving at an overwhelming pace, with no clear solution in sight to limit the harm it can cause.
She stressed,
"We cannot sit back and allow these bespoke AI apps to have such a dangerous hold over children's lives."
Is the UK Government Doing Enough to Combat AI-Generated Sexual Abuse Material?
Despite ongoing legal efforts to tackle the creation and sharing of AI-generated child sexual abuse material (CSAM), critics argue the measures don't go far enough.
Under current laws, it is illegal to share or threaten to share explicit deepfake images, and there are criminal offences related to creating or distributing AI tools designed for this purpose.
However, Dame Rachel believes that a complete ban on nudification apps is necessary.
She has called for legal obligations on developers of generative AI tools to identify and mitigate risks to children and urged stronger systems for removing explicit content from the internet.
According to a government spokesperson, the UK has already taken steps to address the issue.
They pointed to the Online Safety Act, which requires platforms to remove CSAM or face hefty fines.
In addition, new laws were introduced earlier in 2025 to criminalise the possession, creation, and distribution of AI tools that can generate sexually explicit material involving children.
The Growing Impact of AI-Generated Abuse
The rise of AI tools that generate deepfakes has already led to alarming consequences.
Data from the Internet Watch Foundation (IWF) revealed a staggering 380% increase in reports of AI-generated child sexual abuse material, with 245 cases recorded in 2024 compared with just 51 in 2023.
Derek Ray-Hill, interim Chief Executive of IWF, noted,
"We know these apps are being abused in schools, and that imagery quickly gets out of control."
One 16-year-old girl, who participated in a survey by the Children’s Commissioner, shared her fear of AI manipulation, saying,
"Even before any controversy came out, I could already tell what it was going to be used for, and it was not going to be good things. I could already tell it was gonna be a technological wonder that's going to be abused."
The Government's Role in Combating AI Abuse
The government has promised further action, with new laws to criminalise AI-generated CSAM and hold platforms accountable for hosting harmful content.
However, Dame Rachel de Souza insists that stronger measures are needed.
She called for the government to recognise deepfake sexual abuse as a form of violence against women and girls and to address the risks posed by AI tools more effectively.
The UK is not alone in facing this challenge.
Other countries like South Korea are also grappling with the rapid development of AI technologies and the dangers they pose to vulnerable populations.
However, the UK’s proactive stance, including the recent introduction of specific offences related to AI-generated abuse, positions it as a leader in addressing the problem.
As AI continues to evolve, ensuring children's safety online remains a pressing concern.
Despite some legal headway, the Children's Commissioner's push for a complete ban on nudification apps underlines the ongoing need for stronger safeguards to protect vulnerable individuals from these damaging technologies.