Grok is stripping children – can the law stop it?


Grok is starting 2026 the same way it ended 2025: under criticism for its AI-generated images.

Elon Musk’s chatbot has spent the past week filling X with non-consensual sexual deepfakes of adults and minors. Circulating screenshots showed Grok complying with requests to put real women in lingerie with their legs spread and to put small children in bikinis. Later reports described even more serious content among the removed images. The Verge found several photos of minors with prompts requesting “doughnut glaze” on their faces, which appeared to have been removed later. At one point, Grok was producing an estimated nearly one non-consensual sexual image per minute.

X’s Terms of Service prohibit “sexual abuse or exploitation of children,” and on Saturday the company said the platform “will take action against illegal content on X, including child sexual abuse material (CSAM).” It appears to have removed some of the worst material, but overall it has downplayed the events. Musk has said that “anyone who uses Grok to create illegal content will suffer the same consequences as those who upload illegal content,” but he has made clear through public X posts that he does not believe prompts to remove clothing are, in general, a problem, and he has responded to the broader topic with laughing and fire emoji on X. The company’s slow response has worried experts who have spent years trying to address AI-driven sexual harassment and abuse. Multiple governments have said they are investigating, but even amid unprecedented pressure for online regulation, the path to policing X or its chatbot’s creations remains unclear.

Grok’s maker xAI did not respond to a request for comment. Neither Apple nor Google responded when asked whether the reports violated their app store policies.

Grok has always allowed, and Musk has openly encouraged, highly erotic fantasy. But in the past week, the ability to ask Grok to edit images via a new button, which allows changes without the original poster’s permission, has been used virally to undress women and minors. Enforcement of guardrails has been haphazard, and most of X’s visible responses come from Grok itself, meaning they are essentially made up on the spot. Those answers include stating that some of its creations were “against our guidelines for fictional content only” and, at a user’s request, a widely reported apology that xAI does not appear to have issued itself.

One of the biggest questions here is whether the images violate laws against CSAM and non-consensual intimate images of adults (NCII), especially in the US, where X is headquartered. US law prohibits “digital or computer-generated images that are indistinguishable from those of an actual minor” that involve sexual activity or suggestive nudity. And the Take It Down Act, signed into law by President Donald Trump in May 2025, prohibits non-consensual AI-generated “intimate visual depictions” and requires certain platforms to rapidly remove them.

Celebrities and influencers have described feeling humiliated by AI-generated sexualized images; according to the screenshots, Grok has created photos of TWICE singer Momo, actress Millie Bobby Brown, actor Finn Wolfhard, and many others. Grok-generated images are also being used to specifically attack women with political power.

“It’s a tool to express the underlying misogyny that runs rampant in every corner of American society and most societies around the world,” Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), told The Verge. “It’s a violation of privacy, it’s a violation of consent and boundaries, it’s extremely intrusive, it’s a form of gender violence in its own way.” Perhaps above all, explicit images of minors, including those made through dedicated “nudify” apps, have become a growing problem for law enforcement.

On Monday, the Consumer Federation of America (CFA), a group of hundreds of consumer-focused nonprofits, publicly called for both state and federal action against xAI for “creating and distributing child sexual abuse material (CSAM) and other non-consensual intimate imagery (NCII) with generative AI.” The letter, signed by a handful of organizations, was addressed to the Federal Trade Commission and the US attorney general.

Yet the specifics of what is prohibited by US law are “quite vague,” said Mary Anne Franks, a professor of intellectual property, technology, and civil rights law at George Washington University Law School. “Part of what I haven’t been able to figure out is whether it’s really crossing the line into actual nudity and sexual situations.”

Experts told The Verge that using AI to generate an image of an identifiable minor in a bikini (or possibly even nude), while clearly unethical, may not be illegal under existing US CSAM laws. That said, images that appear to include semen could violate both pre-existing CSAM laws and the Take It Down Act, and Franks suspects the circulating screenshots aren’t the worst of it. “We can imagine that whatever is making it into the mainstream media, there are probably a million worse things that people are creating as well,” Franks said. “Every possible prompt you can think of is probably coming up.”

But despite these federal laws and an abundance of state-level laws, experts say it’s still difficult to enforce bans on AI-generated sexual imagery, and even harder to determine what responsibility the platforms may have. “There are ultimately conflicting laws, and there is no legal precedent for it,” Shael Norris, founding executive director of SafeBAE, an organization that works to end sexual violence, told The Verge.

John Langford, visiting clinical associate professor of law at Yale Law School and an attorney for Protect Democracy, said sexual deepfake bans have rarely been tested in court. “This is all new — we’re just starting to develop case law on what happens,” Langford said. But at least some standards exist. For Grok’s creations depicting identifiable minors, “we now have a precedent (that) any computer-generated image of a real child that is sexually explicit is illegal,” said Drew Davis, SafeBAE’s director of strategic initiatives.

There are currently a handful of federal cases over creating or possessing AI-modified images of real children, Pfefferkorn said, and several dozen more at the state level. “When it comes to whether the companies themselves are liable, I think we’re in uncharted territory,” Pfefferkorn said.

Davis said we’re dealing with a complex legal landscape when it comes to AI-generated images of minors. That’s partly because the grace period for the takedown portion of the Take It Down Act, under which platforms must respond to such content, runs until May.

Additionally, Section 230 has long protected companies from liability for content posted by others. But as companies turn to bots like Grok that let users create images themselves, it’s not clear what liability they bear. “That’s why I’m so interested to see if there’s going to be creative prosecution here,” Franks said, adding that the question is whether the company, by virtue of making these images, has violated criminal provisions.

One caveat, many experts told The Verge: virtually all criminal statutes require that an offender posted material with knowledge that it was likely to cause harm. Yale Law’s Langford said that part “presents a really difficult question about whether you can hold Grok or xAI liable.” But, others say, personhood is attributed to corporations in other situations; why not this one? Musk’s frequent, unfiltered posting also provides an unusual window into intent.

Pfefferkorn believes this will be “a significant year in terms of fighting this problem” and said she would not be surprised if class-action lawsuits emerge.

To make things even more complicated, beyond the US, the Trump administration has used trade talks to discourage other countries from regulating US internet platforms. Musk and Trump have a publicly friendly relationship, and any country that attempts to punish X could face the administration’s ire, as well as potential non-compliance from X itself.

Still, an international response is building. Members of the French government said they would investigate the matter, and India’s IT Ministry has issued its own orders. The Communications and Multimedia Commission of the Malaysian government said it “noted with grave concern” complaints about misuse of AI on X, particularly “digital manipulation of images of women and minors to produce indecent, highly offensive, or otherwise harmful content.”

Grok has repeatedly gone off the rails in bizarre and often sexual ways, from its antisemitic breakdowns to allowing people to create partially nude images of Taylor Swift. Outside experts have expressed concerns about its slapdash approach to safety: following the release of Grok 4 in July 2025, it took the company more than a month to release a model card outlining safety features and test results, generally seen as a bare-minimum practice in the industry.

Without outside pressure, Grok’s deepfake problem seems unlikely to end any time soon. Some of the most serious images appear to have been removed after the fact. But larger guardrails, which the Grok 4.1 model card describes with only a brief mention of CSAM, are apparently not working as planned. And Musk’s recent comments suggest he doesn’t see much wrong with Grok’s current state. One of the most striking things about the whole saga, Pfefferkorn said, is not that AI platforms might be motivated to create potential CSAM, but that “we haven’t necessarily seen, so far, much concern about whether they’re getting right close to that line.”



