Elon Musk’s AI company xAI has limited use of its controversial AI image generator Grok to paying customers, following growing outrage over its use to spread sexualized deepfakes of women and children.
The start-up announced the move on Friday morning, days after it emerged that the chatbot was used to create explicit images of people without consent.
Those revelations have prompted lawmakers from the European Union, France and Britain to threaten fines and sanctions if the company does not take action on the platform.
“Image creation and editing is currently limited to paying subscribers,” Grok posted on X. xAI did not immediately respond to a request for further comment.
Grok is intentionally designed with fewer content guardrails than competitors, with Musk calling the model “maximum truth-seeking”. The company’s chatbot also includes a feature that allows users to generate risqué images.
UK Prime Minister Sir Keir Starmer promised on Thursday to take action against X, urging social media platforms to “work together” to stop its AI chatbot tool Grok from creating sexually explicit images of children.
Meanwhile, the European Commission has ordered X to retain internal documents related to Grok until the end of the year. French ministers have also reported sexual images made by Grok to prosecutors and media regulators.
On January 3, Musk posted on X that “anyone using Grok to create illegal content will face the same consequences as those who upload illegal content”.
The rise of generative AI has led to an explosion of non-consensual deepfake imagery, as the technology makes such images easy to create.
The Internet Watch Foundation, a UK-based non-profit, said AI-generated child sexual abuse imagery has doubled in the past year, with the content becoming more extreme.
While xAI said it had removed illegal AI-generated images of children, the latest incident will raise further concerns over how easily safety guardrails in AI models can be overridden. The tech industry and regulators are grappling with the far-reaching societal impacts of generative AI.
In 2023, researchers at Stanford University discovered that a popular database used to create AI-image generators was full of child sexual abuse material.
The laws governing harmful AI-generated content are complicated. In May 2025, the US enacted the Take It Down Act, which tackles AI-generated “revenge porn” and deepfakes.
The UK is also working on a bill that would make it illegal to possess, create or distribute AI tools that generate child sexual exploitation material, and require AI systems to be thoroughly tested to check that they cannot generate illegal material.
