UK Tech Companies and Child Safety Officials to Examine AI's Capability to Create Exploitation Content

Technology companies and child protection agencies will be granted authority to assess whether AI tools can produce child abuse material under recently introduced British laws.

Substantial Increase in AI-Generated Harmful Material

The announcement came as a protection monitoring body reported that instances of AI-generated child sexual abuse material have more than doubled in the last twelve months, rising from 199 reports in 2024 to 426 in 2025.

New Regulatory Framework

Under the amendments, the government will allow designated AI developers and child safety groups to examine AI systems – the foundational models underpinning conversational AI and image-generation tools – and verify that they have sufficient safeguards to stop them producing images of child exploitation.

"Ultimately about preventing abuse before it occurs," declared the minister for AI and online safety, adding: "Experts, under strict protocols, can now detect the danger in AI models early."

Tackling Legal Obstacles

The changes address a legal gap: because it is against the law to create or possess CSAM, AI developers and other parties could not generate such images as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM was uploaded online before addressing it.

The law is designed to prevent that issue by enabling experts to halt the production of such material at source.

Legislative Structure

The authorities are introducing the changes as revisions to the criminal justice legislation, which will also bring in a prohibition on possessing, producing or sharing AI systems designed to create exploitative content.

Real-World Impact

This week, the official visited the London headquarters of Childline and listened to a simulated call to counsellors involving a report of AI-based abuse. The call portrayed an adolescent seeking help after facing extortion using a sexualised deepfake of himself created with AI.

"When I learn about children facing extortion online, it is a source of intense frustration in me and rightful anger amongst families," he stated.

Concerning Statistics

A prominent online safety organization stated that instances of AI-generated abuse content – such as webpages that may include numerous images – had significantly increased so far this year.

Instances of category A content – the most serious form of exploitation – increased from 2,621 images or videos to 3,086.

  • Female children were overwhelmingly victimized, making up 94% of illegal AI images in 2025
  • Portrayals of infants to two-year-olds rose from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "constitute a crucial step to guarantee AI tools are safe before they are launched," commented the chief executive of the internet monitoring organization.

"AI tools have enabled so survivors can be victimised all over again with just a simple actions, providing criminals the ability to make potentially limitless quantities of sophisticated, photorealistic child sexual abuse material," she continued. "Content which further exploits survivors' trauma, and makes young people, particularly girls, more vulnerable both online and offline."

Support Session Data

The children's helpline also released details of counselling sessions in which AI was mentioned. AI-related harms discussed in those conversations included:

  • Employing AI to rate body size, physique and appearance
  • AI assistants discouraging young people from consulting trusted guardians about abuse
  • Being bullied online with AI-generated material
  • Online blackmail using AI-faked images

Between April and September this year, the helpline delivered 367 support sessions in which AI, chatbots and related topics were discussed, significantly more than in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.

Craig Church