Dark Web Users Using Grok for Child Exploitation Imagery
A British organization dedicated to stopping child sexual abuse online said Wednesday that its researchers observed dark web users sharing “criminal imagery” that the users said was created by Elon Musk’s artificial intelligence tool Grok.
The images, which the group said included topless pictures of minor girls, appear to be more extreme than recent reports that Grok had created images of children in revealing clothing and sexualized scenarios.
The Internet Watch Foundation, which for years has warned about AI-generated images of child sexual abuse, said in a statement that the images had spread on a dark web forum where users talked about Grok’s capabilities. It said the images were unlawful and that it was unacceptable for Musk’s company xAI to release such software.
“Following reports that the AI chatbot Grok has generated sexual imagery of children, we can confirm our analysts have discovered criminal imagery of children aged between 11 and 13 which appears to have been created using the tool,” Ngaire Alexander, head of hotline at the Internet Watch Foundation, said in the statement.
Because child abuse material is unlawful to make or possess, people interested in trading or selling it frequently use software designed to mask their identities or communications, in setups that are sometimes called the dark web.
Like the U.S.-based National Center for Missing & Exploited Children, the Internet Watch Foundation is one of a handful of organizations in the world that partners with law enforcement to take down child abuse material in dark and open web spaces.
Groups like the Internet Watch Foundation can, under strict protocols, assess suspected child sexual abuse material and refer it to law enforcement and platforms for removal.
xAI did not immediately respond to a request for comment on Wednesday.
The statement comes as xAI faces a torrent of criticism from government regulators around the world in connection to images produced by its Grok software over the past several days. That followed a Reuters report on Friday that Grok had created a flood of deepfake images sexualizing children and nonconsenting adults on X, Musk’s social media app.
In December, Grok released an update that seemingly facilitated what has since become a trend on X: asking the chatbot to remove clothing from other users’ photos.
Typically, major creators of generative AI systems have attempted to add guardrails to prevent users from sexualizing photos of identifiable people, but users have found ways to make such material using workarounds, smaller platforms and some open-source models.
Elon Musk and xAI have stood apart among major AI players by openly embracing sexual content on the platform.
