Combating the rise of AI-generated child sexual abuse material

Posted in Children's Rights, Exploitation, Human Rights

The world is undergoing a boom in accessible artificial intelligence (AI) technologies. New tools and capabilities present novel opportunities for criminal and malicious actors to exploit children. As lawmakers, policymakers and law enforcement work to keep up with criminal innovations, children are growing increasingly susceptible to AI-generated child sexual abuse material (CSAM). New technologies are enabling criminals to create, amplify and distribute CSAM at an unprecedented rate. Without urgent intervention, this trend threatens to undermine the safety of children online. 

What is AI-generated CSAM?

AI-generated CSAM is synthetic or lifelike CSAM created using AI tools (Clark, 2023). This content poses a dual threat: AI tools enable the generation of new and harmful CSAM, and they simultaneously allow criminals to access and disseminate existing CSAM faster than ever before. Illicit content can contain altered images of real children or entirely fabricated creations. AI tools’ capability to create content that did not previously exist is often termed ‘generative’, and this modality widens the net of potential CSAM victims because images can be fabricated of any child.

AI has presented an unprecedented opportunity for criminals to create an unlimited volume of CSAM. All forms of AI-generated CSAM are illegal, irrespective of whether they depict a real child or not. AI tools have become so sophisticated that most AI-generated CSAM is ‘visually indistinguishable’ from real CSAM, even to trained analysts (Internet Watch Foundation, 2023).

Trends and prevalence

Though the use of AI to generate and propagate CSAM is relatively novel, there are emerging insights into the methodologies employed by criminal actors. In terms of generation, open-access AI software such as Stable Diffusion is being used to produce fraudulent CSAM content (Crawford and Smith, 2023).

The ease of use of this and other software allows individuals – including those with no prior technological experience – to create new images from word prompts and simple instructions (Crawford and Smith, 2023). The text-to-image technology is quick, accurate and allows large volumes of content to be created simultaneously (Internet Watch Foundation, 2023).

In terms of housing and disseminating CSAM, content-sharing sites such as Patreon are being used to store AI-generated CSAM images (Crawford and Smith, 2023). Criminals can purchase subscriptions on these same services to gain consistent access to the images (Crawford and Smith, 2023). Patreon has expressed zero tolerance towards this content appearing on its website; however, identifying and sanctioning the actors who exploit these platforms is easier said than done (Crawford and Smith, 2023).

Global struggles against AI-enhanced online child abuse

This challenge is not limited to Patreon. The United Kingdom’s (UK) National Crime Agency (NCA) believes that expanding end-to-end encryption – the process of securing data between two communicating parties and preventing third parties from accessing it – across Meta’s platforms (Facebook, Instagram and others) would cause law enforcement to lose access to between 85% and 92% of the CSAM referrals coming through those platforms (Home Office, 2023).

Studies in the US suggest Facebook, Instagram and WhatsApp are the social media platforms housing the greatest proportion of illicit material (McQue, 2024). They are closely followed by other large-scale social media entities, reflecting the central role of some of the world’s most popular websites and applications in enabling CSAM dissemination.

Owing to the ease of production of CSAM facilitated by AI, it is difficult to accurately measure the scale and prevalence of the content online. The UK’s Home Secretary stated that around 800 sexual predators are arrested and 1,200 children safeguarded by the government each month (Home Office, 2023), but these figures are not limited to the online domain. Further, AI software can be downloaded and run locally, meaning malicious actors can create limitless content whilst disconnected from the internet, obscuring them from law enforcement (Internet Watch Foundation, 2023).

The limited analysis that does exist points to an alarming volume of AI-generated CSAM being produced in a short space of time. Analysis in the UK found 20,254 images had been posted to just one AI CSAM forum in the month of September 2023 (Internet Watch Foundation, 2023). 

In the United States of America (USA), the National Center for Missing and Exploited Children (NCMEC) only started receiving reports of AI-generated CSAM in 2023, and it has already received reports linked to nearly 5,000 pieces of content (McQue, 2024). In 2023, the USA’s CyberTipline received nearly 36 million reports referring to CSAM incidents; over 90% of the content was uploaded on foreign soil, and over 60,000 of the reports involved a child in perceived immediate danger (McQue, 2024).

Law enforcement and policy responses

Given the cross-cutting nature of the AI-generated CSAM challenge, multi-sectoral approaches are required to meaningfully disrupt the trend. In the private sector, 27 organisations, including TikTok and Snapchat, recently partnered with several governments, including those of the USA, Germany and Australia, to explore new ways to combat AI-generated CSAM (Clark, 2023).

Bilateral and multinational government interventions are also beginning to surface: in 2023 the USA and the UK issued a joint statement reaffirming their commitment to combatting AI-generated child sexual abuse imagery (Home Office and US Secretary of Homeland Security, 2023). This was supported by a landmark UK government bill, the Online Safety Bill, which imposes stricter requirements on private sector entities to do all they can to identify and remove CSAM from their websites (Home Office, 2023).

Strategic responses to AI-generated CSAM

The growing threat of AI-generated CSAM necessitates a prompt and comprehensive response. Among other interventions, there are several key actions governments must take to catalyse a robust response.

  • Governments must update their CSAM legislation, where necessary, so that it adequately encourages the monitoring of AI-generated CSAM and creates pathways for arrests and prosecutions (Internet Watch Foundation, 2023).
  • Governments must work with overseas and neighbouring jurisdictions to align their processes and regulations on AI-generated CSAM (Internet Watch Foundation, 2023). Inconsistent regulatory frameworks create loopholes for criminal actors to exploit.
  • Governments must monitor and regulate the work of private sector technology developers, who often innovate at a faster rate than policymakers can legislate for new criminal activities. At a minimum, technology companies must be asked to clearly prohibit CSAM on their platforms, and actively seek out ways to prevent the dissemination of harmful content via their platforms. 
  • Law enforcement actors must be trained on the use of AI tools to help them identify and process AI-generated CSAM. Where resource constraints exist, law enforcement bodies should be encouraged and supported to develop novel ways to counter the proliferation of CSAM. 
  • Research bodies should seek out opportunities to enhance the evidence base on AI-generated CSAM. There is currently no standardised understanding of the scale of the threat, nor is there robust analysis of where the content is housed online. This gap undermines the law enforcement response.

At Humanium, we are dedicated to safeguarding children’s rights and ensuring their safety in the digital age. By supporting our efforts through donations, volunteering, or advocacy, you can help us protect children across the globe. Join us in our mission to make the digital world a safer place for future generations.

Written by Vanessa Cezarita Cordeiro 

References:

Clark, E. (2023, October 31). “Pedophiles using AI to generate child sexual abuse imagery.” Retrieved from Forbes, accessed 22 April 2024.  

Crawford, A. and Smith, T. (2023, June 28). “Illegal trade in AI child sex abuse images exposed.” Retrieved from BBC News, accessed 22 April 2024.

Home Office and US Secretary of Homeland Security. (2023, September 27). “A joint statement from the United States and the United Kingdom on combatting child sexual abuse and exploitation.” Retrieved from Home Office, accessed 22 April 2024. 

Home Office. (2023, September 27). “UK and US pledge to combat AI-generated images of child abuse.” Retrieved from Home Office, accessed 22 April 2024. 

Internet Watch Foundation. (2023, October 25). “‘Worst nightmares’ come true as predators are able to make thousands of new AI images of real child victims.” Retrieved from Internet Watch Foundation, accessed 22 April 2024.

Internet Watch Foundation. (2023, October). “How AI is being abused to create child sexual abuse imagery.” Retrieved from Internet Watch Foundation, accessed 22 April 2024. 

McQue, K. (2024, April 16). “Child sexual abuse content growing online with AI-made images, report says.” Retrieved from The Guardian, accessed 22 April 2024.