xAI’s Grok is removing clothing from people’s photos without their consent, following this week’s rollout of a feature that lets X users instantly edit any image with the bot, without the original poster’s permission. Not only is the original poster not notified when their photo is edited, but Grok appears to have few guardrails in place, blocking little short of full, explicit nudity. Over the past few days, X has been flooded with photos of women and children edited to depict them pregnant, with skirts removed, in bikinis, or in otherwise sexualized situations. World leaders and celebrities have also had their likenesses used in images created by Grok.
AI detection company Copyleaks reported that the trend of removing clothing from images began when adult-content creators asked Grok for sexualized images of themselves after the release of the new image-editing feature. Other users then began applying similar prompts to photos of other people, primarily women, who had not consented to the edits. News outlets including Metro and PetaPixel noted a rapid rise in deepfake creation. Grok was already able to modify images in a sexualized manner when tagged in a post on X, but the new “Edit Image” tool appears to have fueled the recent surge in popularity.
In one exchange, another X user prompted Grok to apologize for an “incident” involving “an AI image of two young girls (estimated age 12–16) in erotic attire.” Grok called it a “failure in safeguards” and said it may have violated xAI’s policies and US law. (Realistic AI-generated sexually explicit images of recognizable adults or children can be illegal under US law, although it is unclear whether the images created by Grok would meet that standard.) In another back-and-forth with a user, Grok suggested that users report it to the FBI for CSAM, noting that it was “immediately fixing” the “deficiencies in security measures.”
But Grok’s words are nothing more than an AI-generated response to a user asking for a “heartfelt apology note”; they do not indicate that Grok “understands” what it is doing, nor do they necessarily reflect the actual views and policies of its operator, xAI. Instead, xAI responded to Reuters’ request for comment on the situation with just three words: “Legacy media lies.” xAI did not respond to The Verge’s request for comment in time for publication.
The wave of bikini edits appears to have been kicked off by Elon Musk himself, who asked Grok to edit a meme image of actor Ben Affleck to put him in a bikini. A few days later, North Korean leader Kim Jong Un’s leather jacket was swapped for a multicolored spaghetti-strap bikini, with US President Donald Trump standing beside him in a matching swimsuit, accompanied by jokes about nuclear war. On January 2nd, a user turned a 2022 photo of British politician Priti Patel into a bikini image, posting it with a sexually suggestive message. In response to the wave of bikini photos on his platform, Musk jokingly reposted a picture of a toaster in a bikini captioned “Grok can put a bikini on anything.”
While some images, such as the toaster, were clearly meant as jokes, others were plainly designed to elicit borderline-pornographic imagery, including specific instructions for Grok to use skimpier bikini styles or remove skirts altogether. (The chatbot removed the skirts, but it did not depict full, uncensored nudity in the responses The Verge saw.) Grok also complied with a request to put a baby in a bikini.
Musk’s AI products are notably sexualized and lightly guardrailed: xAI offers a flirtatious AI companion, Ani, and The Verge reporters Victoria Song and Jess Weatherbed found that Grok’s video generator easily created topless deepfakes of Taylor Swift, despite xAI’s acceptable use policy banning the depiction of “likenesses of persons in a pornographic manner.” In contrast, Google’s Veo and OpenAI’s Sora video generators have guardrails against creating NSFW content, although Sora has also been used to create videos of children in sexualized contexts, as well as erotic videos. According to a report by cybersecurity firm DeepStrike, the prevalence of deepfake images is growing rapidly, and many of them are nonconsensual sexual images; in a 2024 survey of American students, 40 percent knew of a deepfake depicting someone they knew, while 15 percent knew of a nonconsensual explicit or intimate deepfake.
Asked why it was turning images of women into bikini photos, Grok rejected the premise that it was posting photos without consent, saying: “These are AI creations based on requests, not actual photo editing without consent.”
Make of the AI bot’s rebuttal what you will.
