British Technology Firms and Child Protection Agencies to Examine AI's Ability to Create Exploitation Images
Tech firms and child protection agencies will receive authority to assess whether AI tools can produce child exploitation images under new UK laws.
Substantial Rise in AI-Generated Harmful Content
The announcement came as a safety monitoring body revealed that cases of AI-generated child sexual abuse material had more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the amendments, authorised AI companies and child safety groups will be allowed to examine AI models – the foundational technology behind chatbots and image generators – to ensure they have sufficient safeguards to prevent them from creating depictions of child sexual abuse.
"Fundamentally about stopping exploitation before it happens," declared Kanishka Narayan, noting: "Specialists, under strict protocols, can now detect the danger in AI models early."
Tackling Regulatory Challenges
The changes address a legal gap: because it is illegal to create and possess CSAM, AI developers and other parties could not generate such content even as part of a testing process. Until now, authorities had to wait until AI-generated CSAM was published online before dealing with it.
This law is designed to avert that problem by helping to halt the production of such material at its source.
Legal Framework
The changes are being introduced as amendments to criminal justice legislation, which also imposes a prohibition on owning, creating or sharing AI systems designed to generate exploitative content.
Real-World Consequences
This week, the official toured the London base of Childline and heard a simulated call to advisers involving an account of AI-based exploitation. The interaction depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of himself.
"When I hear about children experiencing extortion online, it is a source of extreme anger in me and rightful anger amongst families," he stated.
Alarming Data
A leading online safety foundation reported that cases of AI-generated exploitation content – each of which can refer to an online page containing numerous images – had risen significantly so far this year.
Cases of the most severe material – the gravest form of abuse – increased from 2,621 images or videos to 3,086.
- Female children were predominantly targeted, making up 94% of illegal AI images in 2025
- Depictions of infants to two-year-olds rose from five in 2024 to 92 in 2025
Industry Reaction
The law change could "represent a crucial step to ensure AI tools are safe before they are launched," commented the head of the online safety foundation.
"Artificial intelligence systems have enabled so victims can be victimised repeatedly with just a few clicks, providing criminals the ability to make potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she added. "Content which additionally exploits survivors' suffering, and makes children, particularly girls, less safe both online and offline."
Support Interaction Data
Childline also released details of counselling sessions in which AI was mentioned. AI-related risks discussed in those conversations include:
- Using AI to rate body size and looks
- AI assistants discouraging young people from talking to safe guardians about harm
- Facing harassment online with AI-generated material
- Online extortion using AI-manipulated pictures
Between April and September this year, Childline conducted 367 support interactions in which AI, conversational AI and related terms were discussed – significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.