The Internet Watch Foundation (IWF), a UK charity that helps detect and remove child sexual abuse imagery online, has published its latest annual report, which shows that in 2023:
- The IWF assessed 392,665 reports and confirmed 275,652 webpages contained images or videos of children suffering sexual abuse.
- 254,071 (92%) of these webpages showed “self-generated” child sexual abuse imagery, in which children are targeted and groomed or coerced into engaging in sexual activity that is then recorded via webcams or other camera-equipped devices.
- 2023 was the “most extreme” year on record, with more Category A child sexual abuse imagery discovered than ever before: nearly one in seven images, a 22% increase on the previous year.
- Children aged 3 to 6 years old are being coerced into sexual acts including self-penetration, bestiality, and sadism or degradation.
- A record number of companies now take services from the IWF to stop child sexual abuse images circulating online.
With the number of reports containing child sexual abuse increasing once again, and with evidence that children aged 3 to 6 are being targeted online, the IWF is calling for the development and use of tools that can detect previously unseen child sexual abuse imagery.
The IWF report follows recent warnings from the communications watchdog Ofcom that almost a quarter of children aged 5 to 7 have a smartphone of their own.
Responding to the report, Security Minister Tom Tugendhat urged tech firms to do more to prevent abuse. He also called on parents "to speak to your children about their use of social media, because the platforms you presume safe may pose a risk".
Ofcom's research suggests a third of 5-7 year-olds who browse social media are allowed to do so alone.
Ian Critchley, child protection lead for the National Police Chiefs' Council, said protecting young children was not just the responsibility of parents and carers. "The biggest change" needed to come from the tech companies and online platforms, he said.
As part of its work implementing the new Online Safety Act, Ofcom has said it will consult on how automated tools, including AI, can be used to "proactively detect" illegal content, including child sexual abuse material.
But the IWF is calling for swift action and argues that technology firms should not wait. Susie Hargreaves, the IWF’s chief executive, said: "The harms are happening to children now, and our response must be immediate."
AI is already used by some big tech firms to help identify content that violates their terms, including child abuse material, which is then reviewed by human moderators. But experts warn that AI alone is not a cure-all. Professor Alan Woodward, a computer security expert at the University of Surrey, commented: "AI may prove useful in helping with the scale of the data being analysed but at its current state of development it shouldn’t be considered a complete solution."
Furthermore, the issues AI raises for tackling child sexual abuse are inherently international in nature, and so addressing them will require international cooperation.