Grok: ICO Investigates AI-Generated Sexualised Images & Data Privacy

by Ahmed Hassan - World News Editor

AI Tool Allegedly Used to Create Sexual Imagery of Children Sparks Regulatory Action

Online criminals are claiming to have used the artificial intelligence tool Grok, developed by xAI and owned by Elon Musk, to create sexual imagery of children, according to reports surfacing in January 2026. The claims, made on a dark web forum, have prompted investigations from regulators in the UK, California and elsewhere.

The UK-based Internet Watch Foundation (IWF) confirmed that its analysts discovered imagery of girls aged between 11 and 13 that appeared to have been created using Grok Imagine. Ngaire Alexander, head of the IWF’s hotline, stated the images would be considered child sexual abuse material (CSAM) under UK law. “One can confirm our analysts have discovered criminal imagery of children aged between 11 and 13 which appears to have been created using the tool,” Alexander said.

The controversy extends beyond the creation of CSAM. X, Musk’s social media platform, has been flooded with Grok-generated images of women and children whose clothing has been digitally removed, prompting public outcry and condemnation from politicians.

Regulatory Response Intensifies

The UK’s communications regulator, Ofcom, launched an investigation into X over concerns that Grok is being used to create sexualized images, including undressed images of people and sexualized images of children. Ofcom stated there had been “deeply concerning reports” of the chatbot being used in this manner. If found to have broken the law, X could face a fine of up to 10% of its worldwide revenue or £18 million, whichever is greater.

Technology Secretary Liz Kendall welcomed Ofcom’s investigation, urging the body to complete it “swiftly because the public – and most importantly the victims – will not accept any delay.” Her predecessor, Peter Kyle, described the situation as “appalling,” noting that Grok had “not been tested appropriately.” Kyle recounted an instance of a woman finding an AI-generated image of herself in a bikini outside Auschwitz, saying it made him feel “sick to my stomach.”

The House of Commons women and equalities committee announced that it would no longer use X for its communications, citing concerns about preventing violence against women and girls. This marks the first significant move by a Westminster organization to exit X in response to the misuse of Grok.

California Launches Investigation

California Attorney General Rob Bonta announced an investigation into xAI and Grok, focusing on the “proliferation of nonconsensual sexually explicit material produced using Grok.” Bonta’s statement highlighted reports of Grok taking existing photos of women and children and sexualizing them without consent, including undressing individuals and posing them in suggestive ways.

Bonta’s office noted that Grok’s image generation model includes a “spicy mode” specifically designed to produce explicit content, which the company has reportedly used as a marketing tool. An analysis cited in Bonta’s release indicated that over half of the 20,000 images generated by the program between Christmas and the New Year depicted people in minimal clothing, with some appearing to be children. “The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking,” Bonta said in a statement.

X’s Response and Further Concerns

X referred inquiries to a statement posted by its Safety account in early January, which said that anyone using Grok to create illegal content would face the same consequences as those who upload such content directly. Elon Musk responded to criticism by claiming the UK government was seeking “any excuse for censorship” when asked why other AI platforms were not facing similar scrutiny.

The BBC reported seeing several examples of digitally altered images on X, where women were undressed and placed in sexual positions without their consent. One woman reported that more than 100 sexualized images had been created of her.

William Malcolm, executive director for regulatory risk and innovation at the Information Commissioner’s Office (ICO), stated, “The reports about Grok raise deeply troubling questions about how people’s personal data has been used to generate intimate or sexualised images without their knowledge or consent, and whether the necessary safeguards were put in place to prevent this.”

If X fails to comply with Ofcom’s investigation, the regulator has the power to seek a court order to block access to the site in the UK.
