


AI child sexual abuse on the rise

The growth of Artificial Intelligence (AI) continues at pace, and with it comes, sadly yet predictably, the creation of sexual abuse images and videos.

The Internet Watch Foundation (IWF) first reported in October 2023 on the presence of more than 20,000 AI-generated images on a dark web forum in a single month, around 3,000 of which depicted criminal child sexual abuse (CSA). AI-generated CSA content is now increasingly being found on publicly accessible areas of the internet (the “clear web”).

The IWF reports that many of the images and videos are so realistic that they can be very difficult to tell apart from imagery of real children. Perpetrators are now using images of real victims to generate new images, creating deepfake videos of abuse by manipulating adult pornography, and creating abusive imagery from less explicit content. As with non-AI CSA images, AI-generated imagery is regarded as criminal content under UK law.

According to one Senior Internet Content Analyst at the IWF, the situation with AI-generated content was reaching a “tipping point” where “the potential is there for organisations and the police to be overwhelmed by hundreds and hundreds of new images, where we don’t always know if there is a real child that needs help.” 

The IWF traces where CSA content is hosted in order to have it removed. Addresses of webpages containing AI-generated CSA are uploaded onto the IWF’s URL list, which is shared with the tech industry to block the sites and prevent people from being able to access or see them. The AI images are also tagged as AI on a “Hash List”, which can be used by law enforcement in their investigations.

The police continue to work proactively to pursue offenders, and this is no different for AI-generated imagery. However, it is vital that tech companies do more to make their platforms safer, including those companies responsible for the developing use of AI.

Tags

uk & europe, abuse and neglect, casualty, disease, cyber, education, employer and public liability, insurance & reinsurance, local authority