There was a time when it was possible to defend a child pornography charge by arguing that the images were virtual rather than real. In essence, the defense was that the images were "fake" — created by combining materials depicting both children and adults — and that the participants in the video or still shots were not actually children. Any such defense is becoming increasingly implausible because of the power of Artificial Intelligence, or AI.
An increasing number of child pornography images and videos created with Artificial Intelligence are appearing online. These images and videos may not depict real children, but they appear to, because the AI tools that create them are very powerful and increasingly sophisticated. We are now at a point where AI-generated images are indistinguishable from real ones. Gone are the days when a hand had too many fingers, a background was blurry, or the transition from frame to frame was not smooth. Moreover, some of the new material contains a mix of real and AI-generated child participants. Viewers of such material exchange it in internet forums and via social media. Additionally, not all of the material is posted purely for viewing: there have apparently been incidents of children being blackmailed after their facial images were lifted from an online school yearbook or other innocuous source.
Further, both the sophistication and the volume of these images are striking. The National Center for Missing and Exploited Children ("NCMEC") reports that it received approximately 485,000 reports of AI-generated child pornography in the first half of this year, compared with approximately 67,000 for all of 2024. Amazon, which has an AI tool, reported 380,000 child pornography images and videos in the first half of 2025, while OpenAI reported 75,000 incidents. The British Internet Watch Foundation reports having identified almost 1,300 child pornography videos globally this year, compared to two in the first half of 2024. An increasing number of tech companies with AI tools are reporting these online discoveries to NCMEC, which in turn alerts law enforcement. Some of these companies also deploy screening tools and post warnings to their users.