British Tech Companies and Child Safety Agencies to Test AI's Capability to Create Exploitation Content
Tech firms and child protection organizations will be granted authority to assess whether AI tools can produce child abuse images under new UK legislation.
Substantial Rise in AI-Generated Harmful Material
The announcement coincided with findings from a safety watchdog showing that cases of AI-generated child sexual abuse material have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Legal Framework
Under the amendments, approved AI developers and child protection groups will be allowed to examine AI models – the underlying systems behind conversational AI and image generators – to ensure they have sufficient protective measures to prevent them from producing images of child sexual abuse.
"Ultimately, this is about stopping abuse before it occurs," stated the minister for AI and online safety, noting: "Experts, under rigorous protocols, can now detect the danger in AI systems promptly."
Tackling Regulatory Obstacles
The amendments address a legal obstacle: because it is against the law to create and possess CSAM, AI developers and others could not generate such images as part of an evaluation regime. Until now, officials had to wait until AI-generated CSAM was published online before acting on it.
This law is designed to avert that issue by helping to halt the production of those images at source.
Legal Framework
The amendments are being added by the authorities as revisions to the crime and policing bill, which is also implementing a prohibition on owning, creating or distributing AI systems developed to generate exploitative content.
Practical Consequences
Recently, the official toured the London base of Childline and listened to a mock-up conversation with advisers involving an account of AI-based abuse. The call portrayed an adolescent seeking help after facing extortion using an explicit deepfake of himself, constructed using AI.
"When I hear about young people experiencing extortion online, it is a source of intense frustration for me and rightful concern amongst families," he stated.
Alarming Data
A leading internet monitoring foundation stated that instances of AI-generated exploitation content – such as webpages that may include numerous files – had more than doubled so far this year.
Cases of the most severe content – the gravest form of abuse – rose from 2,621 images or videos to 3,086.
- Girls were predominantly victimized, making up 94% of prohibited AI depictions in 2025
- Depictions of newborns to toddlers increased from 5 in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "represent a vital step to guarantee AI products are safe before they are launched," stated the head of the internet monitoring foundation.
"Artificial intelligence systems have made it possible for survivors to be victimised repeatedly with just a few clicks, giving offenders the ability to create potentially limitless quantities of sophisticated, lifelike exploitative content," she continued. "Content which additionally commodifies survivors' suffering, and makes children, particularly girls, more vulnerable both online and offline."
Counseling Interaction Data
The children's helpline also released details of support sessions where AI has been referenced. AI-related risks mentioned in the conversations included:
- Using AI to rate body size and appearance
- AI assistants discouraging children from consulting trusted guardians about abuse
- Being bullied online with AI-generated content
- Digital extortion using AI-faked images
Between April and September this year, the helpline conducted 367 support interactions where AI, chatbots and associated terms were discussed, four times as many as in the equivalent timeframe last year.
Fifty percent of the mentions of AI in the 2025 sessions were connected with mental health and wellbeing, including the use of AI assistants for support and AI therapy applications.