UK seeks to curb AI child sex abuse imagery with tougher testing

Liv McMahon, Technology reporter


The UK government will allow tech firms and child safety charities to proactively test artificial intelligence (AI) tools to make sure they cannot create child sexual abuse imagery.

An amendment to the Crime and Policing Bill announced on Wednesday would enable “authorised testers” to assess models for their ability to generate illegal child sexual abuse material (CSAM) prior to their release.

Technology secretary Liz Kendall said the measures would “ensure AI systems can be made safe at the source” – though some campaigners argue more still needs to be done.

It comes as the Internet Watch Foundation (IWF) said the number of AI-related CSAM reports had doubled over the past year.

The charity, one of only a few in the world licensed to actively search for child abuse content online, said it had removed 426 pieces of reported material between January and October 2025.

This was up from 199 over the same period in 2024, it said.

Its chief executive Kerry Smith welcomed the government’s proposals, saying they would build on its longstanding efforts to combat online CSAM.

“AI tools have made it so survivors can be victimised all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material,” she said.

“Today’s announcement could be a vital step to make sure AI products are safe before they are released.”

Rani Govender, policy manager for child safety online at children’s charity the NSPCC, welcomed the measures for bringing greater accountability and scrutiny to firms’ models and how they handle child safety.

“But to make a real difference for children, this cannot be optional,” she said.

“Government must ensure that there is a mandatory duty for AI developers to use this provision so that safeguarding against child sexual abuse is an essential part of product design.”

‘Ensuring child safety’

The government said its proposed changes to the law would also equip AI developers and charities to make sure AI models have adequate safeguards around extreme pornography and non-consensual intimate images.

Child safety experts and organisations have frequently warned that AI tools, developed in part using huge volumes of wide-ranging online content, are being used to create highly realistic abuse imagery of children or non-consenting adults.

Some, including the IWF and child safety charity Thorn, have said such images risk jeopardising efforts to police abuse material by making it difficult to identify whether content is real or AI-generated.

Researchers have suggested there is growing demand for these images online, particularly on the dark web, and that some are being created by children.

Earlier this year, the Home Office said the UK would be the first country in the world to make it illegal to possess, create or distribute AI tools designed to create CSAM, with a punishment of up to five years in prison.

Ms Kendall said on Wednesday that “by empowering trusted organisations to scrutinise their AI models, we are ensuring child safety is designed into AI systems, not bolted on as an afterthought”.

“We will not allow technological advancement to outpace our ability to keep children safe,” she said.

Safeguarding minister Jess Phillips said the measures would also “mean legitimate AI tools cannot be manipulated into creating vile material and more children will be protected from predators as a result”.
