The Metropolitan Police is exploring the use of artificial intelligence to help investigators sort and assess online child sexual abuse material more quickly, in a move the force says could identify children at risk earlier and reduce the psychological burden on officers reviewing distressing evidence.
The plan centres on using AI for the rapid grading and triage of seized images and videos, allowing systems to flag the most serious material and help investigators spot content that may involve previously unidentified victims. The Met said the technology would support, rather than replace, human decision-making by pushing the most urgent material to the front of the queue for specialist review.
The force said it investigated more than 5,400 child sexual abuse offences in the last year and safeguarded more than 1,300 children in relation to online abuse and exploitation alone, underlining why it views online child sexual abuse as one of its fastest-growing crime threats.
Under the model being explored, AI tools would automatically pre-sort abuse imagery into the UK’s existing severity bands, including categories A, B and C, so investigators can focus first on the material most likely to point to immediate danger or unknown victims. The aim is to reduce the time officers spend manually working through huge volumes of files before they can begin victim identification and safeguarding action.
The Met said the technology could help determine whether material relates to a known case or whether it signals a new child who may need urgent protection. In practice, that means AI would be used as a triage layer on top of current processes, accelerating file review rather than making final investigative judgments on its own.
The force is also discussing the approach with several technology companies as it tests how such systems could be integrated into operational work. That effort sits alongside other digital tools already being used by the Met, including one system that can review and risk-assess around 641,000 messages in 35 minutes, showing how automation is already being applied to communications evidence as well as imagery.
The most important argument for the new approach is speed. In child sexual abuse investigations, delays can mean the difference between identifying a child in danger and missing a critical window for intervention. The Met said faster triage could help officers identify and safeguard children sooner, particularly when abuse material points to victims not yet known to police or social services.
A second goal is investigator welfare. Officers and staff working these cases are often exposed to large volumes of deeply traumatic material, and the force has made clear that one reason for exploring AI is to significantly reduce how much of that content humans must view directly during the initial sorting phase. Human investigators would still make the key operational decisions, but automation could absorb more of the first pass through the data.
That reflects a broader change in digital policing: AI is increasingly being framed not only as a productivity tool, but as a protective layer for both victims and investigators.

The Met’s announcement lands amid mounting concern over the wider online child abuse landscape, including a sharp increase in AI-generated child sexual abuse material. The Internet Watch Foundation said it identified 8,029 AI-generated images and videos of realistic child sexual abuse in 2025, a 14% increase from the previous year, and warned that synthetic videos have risen at an especially alarming pace.
The IWF has also warned that AI-generated abuse videos are becoming significantly more realistic, with category A material accounting for a high share of the synthetic content it reviewed. That has intensified pressure on police, platforms and regulators to develop tools that can both detect abusive material and help distinguish between synthetic and real-victim cases so urgent safeguarding is not delayed.
That wider context helps explain why law enforcement agencies are turning to automated systems. The challenge is no longer just the existence of abuse material online. It is the scale, speed and complexity of identifying what demands immediate attention.
The Met’s plans fit into a broader trend across policing, child protection and regulation. In recent months, European policymakers have moved toward tighter rules on AI tools used to generate child sexual abuse material, while UNICEF has called for countries to criminalize the creation of AI-generated child sexual abuse content.
At the same time, child safety groups and technology companies are clashing over how aggressively automated scanning tools should be used, particularly in privacy-sensitive contexts. That debate has sharpened in Europe after a legal lapse affecting some detection practices, with safety advocates warning that weaker scanning powers could reduce abuse reporting.
Against that backdrop, the Met’s position is more targeted and operational. Its focus is not on broad public surveillance, but on using AI after evidence has already been seized, to help specialists move faster through data and find children in danger more quickly.
The significance of the Met’s move lies less in the novelty of AI and more in where it is being applied. Online child sexual abuse cases generate vast amounts of digital evidence, and the traditional model of manual review is slow, psychologically punishing and increasingly hard to scale. AI triage offers a way to move investigators faster toward the files that matter most.
If the technology works as intended, the benefit will not simply be administrative efficiency. It will be earlier safeguarding, faster victim identification and less exposure to traumatic material for the officers handling some of the most distressing cases in policing.
That is why this is not just another story about police adopting new software. It is a test of whether AI can be used in one of the most sensitive corners of criminal investigation to shorten the distance between digital evidence and real-world child protection.