Internet Watch Foundation found the majority of the AI-generated videos fell into Category A, the most extreme classification
Margaret Davis & Aine Fox | Friday 16 January 2026 00:12 GMT
Campaigners have issued a stark warning after artificial intelligence was used to create thousands of child sexual abuse videos last year, contributing to record levels of such harrowing material found online.
The Internet Watch Foundation (IWF) revealed its analysts discovered 3,440 AI-generated videos depicting child sexual abuse in 2025, a dramatic increase from just 13 identified in 2024.
Overall, IWF staff processed 312,030 confirmed reports of abuse images found across the internet in 2025, up from 291,730 the previous year.
Their research indicated that of the 3,440 AI-generated videos, 2,230 fell into Category A, the most extreme classification under UK law, with another 1,020 in Category B, the second most severe.
Kerry Smith, IWF chief executive, said: “When images and videos of children suffering sexual abuse are distributed online, it makes everyone, especially those children, less safe.
“Our analysts work tirelessly to get this imagery removed to give victims some hope. But now AI has moved on to such an extent, criminals essentially can have their own child sexual abuse machines to make whatever they want to see.
“The frightening rise in extreme category A videos of AI-generated child sexual abuse shows the kind of things criminals want. And it is dangerous.
“Easy availability of this material will only embolden those with a sexual interest in children, fuel its commercialisation and further endanger children both on and offline.
“Now governments around the world must ensure AI companies embed safety by design principles from the very beginning. It is unacceptable that technology is released which allows criminals to create this content.”
The research comes as X announced limits on its AI chatbot Grok’s ability to manipulate images following an outcry over reports of users being able to instruct it to sexualise images of women and children.
The company said earlier this week that it would prevent Grok from “editing images of people in revealing clothes” and block users from generating similar images of real people in countries where it is illegal.
Technology Secretary Liz Kendall said she still expects the regulator Ofcom to “fully and robustly” establish the facts. Ofcom welcomed the new restrictions but said its investigation will continue as it seeks “answers into what went wrong and what’s being done to fix it”.
The IWF has previously called for all nudifying software to be banned, arguing that AI companies need to make their tools safe before releasing them and insisting that the Government should make this mandatory.
Children’s charity the NSPCC said the IWF’s findings were “both deeply alarming and sadly predictable”.
Its chief executive, Chris Sherwood, said: “Offenders are using these tools to create extreme material at a scale we’ve never faced before, with children paying the price.
“Tech companies cannot keep releasing AI products without building in vital protections. They know the risks and they know the harms that can be caused. It is up to them to ensure their products can never be used to create indecent images of children.
“The UK Government and Ofcom must now step in and ensure tech companies are held to account.
“We are calling on Ofcom to use every tool available to them through the Online Safety Act and for Government to introduce a statutory duty of care to ensure generative AI services are required to build children’s safety into the design of their products and prevent these horrific crimes.”
Ms Kendall branded it “utterly abhorrent that AI is being used to target women and girls”, and insisted the Government “will not tolerate this technology being weaponised to cause harm, which is why I have accelerated our action to bring into force a ban on the creation of non-consensual AI-generated intimate images”.
She added: “AI should be a force for progress, not abuse, and we are determined to support its responsible use to drive growth, improve lives and deliver real benefits, while taking action where it is misused.
“That is also why we have introduced a world-leading offence targeting AI models trained or adapted to generate child sexual abuse material. Possessing, supplying or modifying these models will soon be a crime.”
The Lucy Faithfull Foundation, which works to help offenders stop viewing images of child abuse, said it has also seen the number of people using AI to view and make abuse images double in the last year.
Young people who are worried that indecent images of them have been shared online can use the free Report Remove tool at childline.org.uk/remove
Minister for Safeguarding Jess Phillips said: “This surge in AI-generated child abuse videos is horrifying – this Government will not sit back and let predators generate this repulsive content.”
She added: “There can be no more excuses from technology companies. Take action now or we will force you to.”