The UK communications regulator Ofcom has launched a formal investigation into Elon Musk’s social media platform X, amid concerns that its artificial intelligence tool, Grok, is being used to generate sexualised deepfake images.
In a statement released on Monday, Ofcom said it had received “deeply concerning reports” that Grok was being used to create and distribute non-consensual sexual images, including sexualised images of children.
If the platform is found to have breached UK law, Ofcom has the power to impose fines of up to 10% of X’s global annual revenue or £18 million, whichever is greater. In extreme cases, the regulator could also seek a court order to block access to the platform in the UK.
Victims Speak Out as Evidence Emerges
The BBC has reviewed multiple examples of digitally altered images shared on X, showing women placed into sexual scenarios without their consent. One woman told the broadcaster that more than 100 sexualised images had been created using AI tools.
Such material is classed as illegal content in the UK, covering both non-consensual intimate imagery and child sexual abuse material, which platforms are legally required to remove swiftly.
Political and Public Pressure Mounts
UK Technology Secretary Liz Kendall welcomed the investigation and urged Ofcom to act quickly.
“The public—and most importantly the victims—will not accept any delay,” she said, stressing the urgency of protecting users from harm.
Former Technology Secretary Peter Kyle also criticised the situation, describing it as “appalling” and questioning whether Grok had been adequately tested before release.
He referenced a case involving a Jewish woman whose image had been manipulated and placed into an offensive context using AI, saying the incident made him “feel sick.”
What Ofcom Is Investigating
Ofcom will examine whether X:
- Failed to remove illegal content promptly once alerted
- Took insufficient steps to prevent UK users from accessing harmful material
- Did enough to safeguard children and vulnerable users
An Ofcom spokesperson said the investigation would be treated as “a matter of the highest priority.”
“Platforms must protect people in the UK from illegal content,” the spokesperson said. “We will not hesitate to act where there is a risk of serious harm, particularly to children.”
Global Backlash Against Grok
The UK probe follows international action against Grok’s image-generation capabilities. Over the weekend, Malaysia and Indonesia temporarily blocked access to the tool after similar concerns were raised about misuse.
Elon Musk has previously rejected criticism, suggesting the UK government was seeking “any excuse for censorship,” and questioning why other AI platforms were not facing the same scrutiny. X has not yet responded to the investigation.
A Broader Warning for AI Platforms
The case highlights growing global concerns over AI-generated deepfakes, particularly when they involve sexual exploitation and non-consensual content. Regulators worldwide are increasingly demanding that AI companies implement stronger safeguards, testing, and accountability before releasing powerful generative tools.
For governments and platforms alike, the message is clear: innovation must not come at the expense of safety, dignity, and human rights.