British Technology Firms and Child Protection Agencies to Test AI's Ability to Generate Abuse Content

Technology companies and child protection agencies will receive authority to assess whether AI tools can generate child abuse material under new UK laws.

Significant Increase in AI-Generated Illegal Content

The announcement coincided with findings from a protection watchdog showing that reports of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the amendments, approved AI developers and child protection organisations will be permitted to inspect AI models – the underlying systems behind chatbots and image generators – to ensure they have adequate safeguards against producing depictions of child sexual abuse.

The measure is "fundamentally about preventing exploitation before it occurs," stated Kanishka Narayan, noting: "Specialists, under rigorous conditions, can now detect the risk in AI models early."

Tackling Regulatory Obstacles

The amendments have been introduced because producing and possessing CSAM is against the law, meaning that AI creators and others cannot generate such content even as part of an evaluation process. Until now, authorities could act against AI-generated CSAM only after it had been published online.

This legislation is designed to prevent that problem by helping to halt the production of such images at their origin.

Legal Structure

The amendments are being introduced as revisions to the Crime and Policing Bill, which also establishes a ban on possessing, creating or distributing AI systems designed to generate exploitative content.

Practical Impact

Recently, the minister toured the London headquarters of a children's helpline and listened to a simulated call to advisers involving a report of AI-based abuse. The interaction depicted an adolescent seeking help after being blackmailed with a sexualised deepfake of themselves created using AI.

"When I learn about young people facing blackmail online, it is a source of extreme frustration to me and of rightful anger amongst families," he stated.

Concerning Data

A prominent internet monitoring foundation stated that cases of AI-generated abuse material – where a single webpage may contain numerous images – had risen significantly so far this year.

Instances of the most severe material – the gravest form of exploitation – rose from 2,621 images or videos to 3,086.

  • Female children were predominantly victimized, accounting for 94% of illegal AI images in 2025
  • Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "constitute a crucial step to guarantee AI products are safe before they are launched," stated the chief executive of the internet monitoring organization.

"AI tools have made it so survivors can be victimised repeatedly with just a few clicks, giving offenders the capability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material," she continued. "Content which further commodifies victims' suffering, and makes children, particularly girls, less safe on and offline."

Counseling Interaction Information

Childline also released details of counselling sessions in which AI was mentioned. AI-related risks raised in the sessions include:

  • Using AI to evaluate body size, physique and looks
  • AI assistants discouraging children from talking to trusted guardians about abuse
  • Being bullied online with AI-generated content
  • Online blackmail using AI-faked images

Between April and September this year, the helpline delivered 367 support interactions where AI, conversational AI and related topics were mentioned, four times as many as in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.

Jade Anderson