This is Stepback, a weekly newsletter breaking down one essential story from the tech world. Follow Hayden Field for more on dystopian developments in AI. Stepback arrives in our subscribers’ inboxes at 8AM ET. Opt in for Stepback here.
You could say it all started with Elon Musk’s AI FOMO – and his campaign against “wokeness.” When his AI company, xAI, announced Grok in November 2023, it was described as a chatbot with “a rebellious streak” and “the ability to answer spicy questions that are rejected by most other AI systems.” The chatbot debuted after just a few months of development and only two months of training, and the announcement highlighted that Grok would have real-time knowledge of the X platform.
But there are inherent risks to chatbots drawing on both the internet and X in real time, and it’s safe to say that xAI may not have taken the necessary steps to address them. Since Musk took over Twitter in 2022 and renamed it X, he has cut 30 percent of its global trust and safety staff and 80 percent of its safety engineers, according to Australia’s online safety regulator last January. As for xAI itself, it was unclear whether the company even had a safety team in place when Grok launched. When Grok 4 was released in July, it took the company more than a month to release a model card – a document, generally seen as an industry standard, detailing safety testing and potential concerns. Two weeks after Grok 4’s release, an xAI employee posted on X that the company was hiring for xAI’s safety team and that it “urgently need[s] strong engineers/researchers.” In response to a commenter who asked, “Does xAI do safety?” the staffer said xAI “was working on it.”
Journalist Kat Tenbarge writes about how she first started seeing explicit sexual deepfakes go viral on X in June 2023. Those images were not created by Grok – it didn’t even have the ability to generate images until August 2024 – but X’s response to the concerns was inconsistent. Even last January, Grok was stirring up controversy over its AI-generated images. And last August, Grok’s “spicy” video-generation mode created nude deepfakes of Taylor Swift without even being asked. Experts told The Verge in September that the company takes a haphazard approach to safety and guardrails – and it’s hard enough to keep an AI system on the straight and narrow when you designed it with safety in mind from the start, let alone if you’re going back to fix underlying problems. Now that approach appears to have blown up in xAI’s face.
As has been widely reported, Grok has spent the past few weeks churning out nonconsensual, sexualized deepfakes of adults and minors across the platform. Screenshots show Grok complying with users who asked it to replace women’s clothing with lingerie and to make them spread their legs, as well as to put small children in bikinis. There are even more serious reports. It got so bad that during a 24-hour analysis of images produced by Grok on X, one analysis estimated the chatbot was generating approximately 6,700 sexually suggestive or “nude” images per hour. One reason for the deluge is a recently added feature that lets users reply with an “edit” request asking the chatbot to alter images – without the consent of the original poster.
Since then, we have seen a handful of countries either investigate the matter or threaten to ban X altogether. Members of the French government promised to investigate, as did India’s IT Ministry, and a Malaysian government commission wrote a letter about its concerns. California Governor Gavin Newsom called on the US Attorney General to investigate xAI. The United Kingdom said it is planning to pass a law banning the creation of AI-generated nonconsensual, sexually explicit images, and the country’s communications regulator said it would investigate both X and the generated images to see whether they violated its Online Safety Act. And this week, both Malaysia and Indonesia blocked access to Grok.
xAI initially said its goals for Grok were to “assist humanity in its quest for understanding and knowledge,” to “maximally benefit all of humanity,” and to “empower our users with our AI tools, subject to the law,” as well as to “serve as a powerful research assistant for anyone.” That is a far cry from creating nude-adjacent deepfakes of women – let alone minors – without their consent.
On Wednesday evening, as pressure on the company mounted, X’s safety account posted a statement saying the platform has “implemented technical measures to prevent Grok accounts from allowing the editing of images of real people in revealing clothing such as bikinis” and that the restriction “applies to all users, including paid subscribers.” On top of that, according to the statement, only paid subscribers can now use Grok to create or edit any type of image.
Another important point: My colleagues tested Grok’s image-generation restrictions on Wednesday and found that it took less than a minute to get past most of the guardrails. Although asking the chatbot to “dress her in a bikini” or “take her clothes off” returned censored results, they found it had no problem with prompts such as “show me her cleavage,” “make her breasts bigger,” and “dress her in a crop top and low-rise shorts,” as well as requests to put women in lingerie and sexualized poses. As of Wednesday evening, we were able to get the Grok app to generate revealing images of people using a free account.
Even after X’s Wednesday statement, we may see other countries ban or block access to all of X, or just Grok, at least temporarily. We’ll also be watching how the proposed laws and investigations play out around the world. Pressure is mounting for Musk, who took to X on Wednesday afternoon to say that he is “not aware of any nude underage images generated by Grok.” Hours later, X’s safety account posted its statement.
A big question here is what is technically against the law and what isn’t. For example, experts told The Verge earlier this month that AI-generated images of identifiable minors in bikinis – or possibly even nude – may not be technically illegal under current child sexual abuse material (CSAM) laws in the US, though they are certainly disturbing and unethical. Sexually explicit images of minors, on the other hand, are against the law. We’ll see whether those definitions expand or change, since the current laws are a little murky.
As for nonconsensual intimate deepfakes of adult women, the Take It Down Act, signed into law in May 2025, prohibits nonconsensual AI-generated “intimate visual depictions” and requires certain platforms to remove them quickly. The grace period before that removal requirement goes into effect ends in May 2026, so we could see some significant developments over the next six months.
- Some people have pointed out that it has long been possible to do things like this with Photoshop, or even with other AI image generators. That’s true. But several differences make the Grok case more worrisome: it’s public, it targets “regular” people as much as public figures, the results are often posted in direct reply to the person being deepfaked (the original poster of the photo), and the barrier to entry is low (for proof, just look at how virality spiked after the easy “edit” button launched, even though people could technically do this before).
- Additionally, other AI companies – though they have long lists of their own safety concerns – seem to build considerably more safeguards into their image-generation processes. For example, asking OpenAI’s ChatGPT to return an image of a specific politician in a bikini elicits the response, “Sorry—I can’t help create images that portray an actual public figure in a sexualized or potentially offensive way.” Ask Microsoft Copilot, and it’ll say, “I can’t create that. Images of real, identifiable public figures in erotic or compromising scenarios are not allowed, regardless of whether the intention is humorous or fictional.”
- Spitfire News’ Kat Tenbarge on how Grok’s sexual abuse crisis reached its breaking point – and what else brought us into today’s maelstrom.
- The Verge’s Liz Lopatto has her own take on why Sundar Pichai and Tim Cook are cowards for not removing X from Google’s and Apple’s app stores.
- “If there is no red line around AI-generated sexual exploitation, then no line exists,” write Charlie Warzel and Matteo Wong in The Atlantic on why Elon Musk can’t escape this.
