UK Tech Companies and Child Safety Agencies to Examine AI's Capability to Create Exploitation Content
Technology companies and child safety agencies will receive authority to assess whether AI systems can produce child exploitation images under new UK legislation.
Significant Increase in AI-Generated Harmful Content
The announcement came amid revelations from a safety watchdog that reports of AI-generated child sexual abuse material (CSAM) have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the changes, the government will allow approved AI developers and child safety groups to examine AI systems – the foundational technology for chatbots and image generators – and verify they have adequate safeguards to stop them from producing images of child exploitation.
The measures are "fundamentally about stopping exploitation before it happens," stated Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now identify the danger in AI models promptly."
Tackling Legal Challenges
The amendments have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and other parties cannot create such images as part of a testing process. Until now, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.
This legislation is designed to avert that issue by enabling the creation of such material to be stopped at its source.
Legislative Framework
The changes are being introduced by the government as modifications to the crime and policing bill, which is also implementing a ban on possessing, producing or distributing AI systems developed to create exploitative content.
Practical Impact
This week, the official toured the London headquarters of Childline and heard a simulated call to counsellors featuring an account of AI-based abuse. The call depicted an adolescent seeking help after facing extortion involving an explicit deepfake of themselves, created using AI.
"When I hear about children experiencing blackmail online, it is a cause of intense anger in me and justified anger amongst parents," he stated.
Concerning Data
A prominent internet monitoring foundation reported that cases of AI-generated abuse content – where a single case may be a webpage containing numerous files – had risen significantly so far this year.
Cases of category A material – the most serious form of abuse – rose from 2,621 visual files to 3,086.
- Girls were predominantly targeted, accounting for 94% of illegal AI depictions in 2025
- Depictions of newborns and toddlers rose from 5 in 2024 to 92 in 2025
Sector Response
The law change could "represent a vital step to guarantee AI products are secure before they are released," commented the chief executive of the online safety foundation.
"Artificial intelligence systems have made it possible for victims to be targeted all over again with just a few simple actions, giving criminals the capability to create potentially endless quantities of sophisticated, lifelike exploitative content," she added. "Content which further commodifies victims' trauma, and makes young people, especially girls, more vulnerable both on and offline."
Support Interaction Information
Childline also released details of counselling sessions in which AI was mentioned. AI-related risks discussed in the sessions include:
- Employing AI to evaluate body size, physique and looks
- AI assistants dissuading young people from talking to safe adults about abuse
- Being bullied online with AI-generated material
- Digital blackmail using AI-manipulated pictures
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related terms were mentioned, significantly more than in the equivalent period last year.
Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.