An internet watchdog is sounding the alarm over the growing trend of sex offenders collaborating online to use open source artificial intelligence to generate child sexual abuse material.
“There’s a technical community within the offender space, particularly dark web forums, where they are discussing this technology,” Dan Sexton, the chief technology officer at the Internet Watch Foundation (IWF), told The Guardian in a report last week. “They are sharing imagery, they’re sharing [AI] models. They’re sharing guides and tips.”
Sexton’s organization has found that offenders are increasingly turning to open source AI models to create illegal child sexual abuse material (CSAM) and distribute it online. Unlike closed AI models such as OpenAI’s DALL-E or Google’s Imagen, open source AI technology can be downloaded and adjusted by users, according to the report. Sexton said the ability to use such technology has spread among offenders, who take to the dark web to create and distribute realistic images.
“The content that we’ve seen, we believe is actually being generated using open source software, which has been downloaded and run locally on people’s computers and then modified. And that is a much harder problem to fix,” Sexton said. “It’s been taught what child sexual abuse material is, and it’s been taught how to create it.”
Sexton said the discussions taking place on the dark web involve images of celebrity children and publicly available images of children. In some cases, images of child abuse victims are used to create brand-new content.
“All of these ideas are concerns, and we have seen discussions about them,” Sexton said.
Christopher Alexander, the chief analytics officer of Pioneer Development Group, told Fox News Digital one of the new dangers of this technology is that it could be used to introduce more people to CSAM. On the other hand, AI could be used to help scan the web for missing people, even using “age progressions and other factors that could help locate trafficked children.”
“So, generative AI is a problem, AI and machine learning is a tool to combat it, even just by doing detection,” Alexander said.
Meanwhile, Jonathan D. Askonas, an assistant professor of politics and a fellow at the Center for the Study of Statesmanship at the Catholic University of America, told Fox News Digital that “lawmakers need to act now to bolster laws against the production, distribution, and possession of AI-based CSAM, and to close loopholes from the previous era.”
IWF, which searches the web for CSAM and helps to coordinate its removal, could find itself overwhelmed by reports of such content in the era of AI, Sexton said, noting that the material was already widespread across the web.
“Child sexual abuse online is already, as we believe, a public health epidemic,” Sexton said, according to The Guardian. “So, this is not going to make the problem any better. It’s only going to potentially make it worse.”
Ziven Havens, the policy director at the Bull Moose Project, told Fox News Digital that it will be up to Congress to act to protect both children and the internet.
“By using already available images of real abuse victims, AI CSAM varies very little from that of non-AI-created CSAM. It is equally morally corrupt and disgusting. The extreme dangers created by this technology will have massive implications on the well-being of the internet,” Havens said. “Where these companies fail, Congress must aggressively step up to the plate and act to protect both children and the internet as a whole.”