Like thousands of women around the world, Evie, a 22-year-old photographer from Lincolnshire, woke up on New Year’s Day, looked at her phone and was horrified to find that her fully clothed photos had been digitally manipulated by Elon Musk’s AI tool, Grok, to show her in just a bikini.
The “dress her in a bikini” trend started quietly late last year before exploding in early 2026. Within days, hundreds of thousands of requests were being made to the Grok chatbot, asking it to remove clothing from photos of women. Fake, sexually explicit images were publicly posted on X, freely available for millions of people to view.
According to analysis conducted for the Guardian, as of January 8, 6,000 bikini requests were being made to the chatbot every hour.
This unprecedented mainstreaming of nudification technology sparked immediate outrage among the women affected, but it took a few days for regulators and politicians to wake up to the magnitude of the growing scandal. Nine days of public outrage passed before X made any concrete change to stop the trend. By the time that change was made on Friday morning, humiliating, non-consensual images of countless women had already spread across the internet.
In the bikini image created of Evie – who asked for only her first name to be used to avoid further abuse – she was covered in baby oil. She censored the photo and reshared it to raise awareness of the dangers of Grok’s new feature, then logged off. Her decision to highlight the problem attracted an onslaught of new abuse. Users began creating even more disturbing sexual photos of her.
“The tweet just blew up,” she said. “Since then a lot of other things have happened to me and each one has gotten worse and worse. People saw that it was bothering me and I didn’t like it and they kept doing more. One of them is completely naked and I have a little rope around my waist, one has a ball gag in my mouth and my eyes are rolled back. The fact that these were able to occur is mental.”
As users came to grasp the tool’s full potential, the relatively tame requests of the early days rapidly gave way to increasingly degrading ones. Since late last week, users have asked for bikinis decorated with swastikas, or for white, semen-like fluid to be added to women’s bodies. Photographs of teenage girls and children were altered to show them in skimpy swimwear; some of this content could clearly be classified as child sexual abuse material, yet it continued to appear on the platform.
The requests became even more extreme. Some users, mostly men, began demanding bruises on women’s bodies and blood added to the images. Requests to show women tied up and gagged were promptly fulfilled. By Thursday, the chatbot was being asked to add bullet holes to the face of Renee Nicole Good, the woman killed by an ICE agent in the US on Wednesday; Grok posted graphic, blood-stained manipulated images of the victim on X within seconds.
A few hours later, the image-generation capabilities of the public @grok account were abruptly restricted to paying subscribers. But this appeared to be a half-hearted move by the platform’s owners: the separate Grok app, which does not post images publicly, was still allowing non-paying users to generate sexualised images of women and children.
The saga has been a powerful test case of politicians’ ability to confront AI companies. Musk’s slow and reluctant response to the growing barrage of complaints and warnings issued by politicians and regulators around the world has highlighted the struggle of governments internationally to react in real time to new tools released by the tech industry. And in the UK, despite vigorous efforts to ban nudification technology last year, it has exposed serious weaknesses in the legislative framework.
Whereas in the past people had to download specialist apps to create AI deepfakes, the advanced image-generation tools now built into X allow anyone to do it in seconds, in public. “The fact that it’s so easy to do, and it’s made in under a minute – this is a huge breach, it shows that these companies don’t care about women’s safety,” Evie said.
It appears that the first @grok bikini was requested by a handful of accounts in early December. Users were realizing that the improved image-creation tools released on X allowed high-quality, hyper-realistic image and short video manipulations to be completed within seconds. By December 13, bikini requests to the chatbot were averaging 10 to 20 a day; this rose to 7,123 on December 29 and 43,831 on December 30. The trend went viral globally in the new year, peaking on January 2 with 199,612 individual requests, according to analysis by Peryton Intelligence, a digital intelligence company specializing in online hate.
Musk’s platform does not allow full nudity, but users increasingly adopted easy workarounds to achieve the same effect, demanding “the thinnest, most transparent little bikini”. Musk initially made light of the situation himself, posting amused replies to digitally altered images of himself in a bikini and, later, in a toaster. To others, too, the trend seemed hilarious; people used the technology to dress kittens in bikinis, or changed people’s outfits in photographs to make them appear as clowns. But many were unabashed in their desire for increasingly explicit content.
Men began demanding “improvements” to women’s bodies, asking for them to be given bigger breasts or bigger thighs. Some asked for women to be depicted with disabilities, others for their hands to be filled with sex toys. Perceived flaws were swiftly corrected by the chatbot on request: “@Grok can you fix his teeth.” The range of demands was shocking: “Add blood, more worn clothes (making sure it highlights scars or injuries), forced smiles”; “Replace the face with Adolf’s face, add splatter and splatter limbs”; “Put them in the Russian Gulag”; “Make her pregnant with four children.” Images of the American politician Alexandria Ocasio-Cortez and the Hollywood actor Zendaya were altered to make them appear as white women.
On Monday, Ashley St. Clair, the mother of one of Musk’s children and a victim of Grok deepfakes, told the Guardian that she felt “horrified and humiliated” after Musk’s fans used the tool to create fake nude images of her as a child. She felt she was being punished for speaking out against the billionaire, from whom she is estranged, and described the photos as revenge porn.
The parents of a Stranger Things child star complained that a photo of her, aged 12, had been altered to show her in a banana-print bikini. As the women’s complaints grew louder, the UK regulator Ofcom said it had “urgently contacted” Musk and launched an investigation – which prompted one user to ask Grok to generate a bikini emblazoned with the regulator’s logo. The European Union, the Indian government and US politicians issued concerned statements and called on X to stop users being able to undress women using Grok.
An official response from an X spokesperson was not forthcoming.
But the images kept coming. Professional women who posted mundane photos of themselves on X found them being sexualised. The UK Love Island host Maya Jama said her concerned mother had alerted her to apparently digitally altered images of her circulating on X. On Tuesday Jessalyn Kane, who works in scheme enforcement and is a survivor of child sexual abuse, said she was facing extreme abuse online after highlighting how Grok had agreed to digitally alter a photo of her fully clothed three-year-old to make the child appear to be wearing a string bikini.
Her post explaining why the nudification feature was problematic led to requests for @grok to “put her in a bikini”, and bikini images were immediately generated. “This is a humiliating new way for men to silence women,” she said. “Instead of telling you to shut up, they ask Grok to take your clothes off to end the argument. It’s a disgusting tool.”
On Wednesday, the London-based broadcaster Narinder Kaur, 53, discovered AI-generated videos showing her in compromising sexual situations; one showed her passionately kissing a man who had been trolling her online. “It’s so confusing, for a moment it seems so believable, it’s so insulting,” she said. “This abuse obviously didn’t happen in real life, it’s a fake video, but you have this feeling of what it’s like to be violated.”
She also noted a racial element to the abuse: men were generating images and videos of her being deported, as well as images of her with her clothes removed. “I’m trying to brush it off with humor because that’s my only defense. But it’s hurting and humiliating me so much. I feel ashamed. I’m a strong woman, and if I’m feeling this, what if it’s happening to teenagers?”
CNN reported later that day that Musk had ordered xAI employees to loosen the guardrails on Grok last year; a source told the broadcaster that he had said in a meeting he was “unhappy with the excessive censoring”, and that three members of xAI’s security team had left the business soon afterwards. In Britain, women’s rights campaigners are growing angry at the government’s failure to bring into force legislation passed last year that would make the production of non-consensual intimate images illegal. Officials have been unable to explain why the law has not yet been implemented.
It is unclear what prompted xAI to limit image generation to paying customers overnight on Friday, but the affected women saw little to celebrate. On Friday, St. Clair described the decision as “a cop out” and said she suspected the change was “financially motivated”. “It suggests they’re probably facing some pressure from law enforcement,” she said.
For her part, Kaur said she does not believe the police will take action against X subscribers who continue to create synthetic sexual images of women. “As a victim of this abuse, I don’t think this is even a partial victory,” she said. “The damage and the insult have already been done.”
