Liz Kendall’s response to ‘nudification’ is good – but not enough to solve the problem

by Nana Nwachukwu


Nudification, the use of AI to strip or sexualise images of real people without their consent, is a shocking violation of privacy, yet it has become a familiar and common practice. Between June 2025 and January 2026, I documented 565 instances of users requesting that Grok create non-consensual intimate imagery. Of these, 389 were requested in a single day.

Last Friday, following the backlash against the platform’s ability to create such non-consensual sexual images, X announced that Grok’s AI image-generation feature would be available only to subscribers. Reports suggest the bot no longer responds to prompts to generate images of women in bikinis (although it apparently still will for requests about men).

But as the technology secretary, Liz Kendall, has rightly said, this action “does not go far enough”. Kendall has announced that creating non-consensual intimate images will become a criminal offence this week, and that she will criminalise the supply of nudification apps. This is reasonable given X’s weak response. Placing the feature behind a paywall means the platform can profit more directly from the online dehumanisation and sexual harassment of women and minors. And halting the “bikini” responses only after public condemnation and the threat of legal action is the very least X could do; the bigger question is why it was possible in the first place.

These measures are a step forward. The shadow technology secretary, Julia Lopez, suggested in her response that the government was overreacting, and that this was just “a modern-day iteration of an old problem”, no different from doctored images or Photoshop. She is wrong. The scale is different. The access is different. The speed is different. Photoshop requires technical skill, and the user must publish the result themselves, placing every task except hosting on them. Here, the user simply replies with a short text prompt, and Grok generates and publishes the abusive image to a mass audience.

Kendall’s approach criminalises the users who create or alter these images and the companies that supply dedicated nudification tools. This is where it misses the point. Grok and most major image-generation tools are not dedicated nudification tools; they are general-purpose AI systems with weak safeguards. Kendall is not asking platforms to implement proactive prevention. The law waits for harm to happen and then punishes it.

The shortcomings of this approach are obvious. I watched this material being created for months before the mainstream reaction began. The harmful images that were generated still exist, and have probably been saved and shared on other platforms. For the victims of this AI-generated sexual exploitation material, after-the-fact regulation will not help. The response to such structurally amplified harms must therefore be preventive, not reactive.

Another fundamental problem is that while the UK is pushing for AI safety regulation, the US is moving in the opposite direction. The Trump administration “seeks to enhance the United States’ global AI dominance through a minimal-burden national policy framework for AI”. Under this framework, US AI companies have little incentive to police misuse of their products. This matters because AI regulation is incomplete without cross-border cooperation. Kendall can criminalise users in the UK, and even threaten to ban X entirely. But that cannot stop Grok from being developed in San Francisco. It cannot force OpenAI, Anthropic or any other American company to prioritise safety over speed. Without US cooperation, we are attempting to regulate an international technology with national laws.

While this tug-of-war over regulation and policy plays out, many victims and other women online may be wondering what this new era of AI-enabled online sexual harassment means for them, and questioning their participation on global social media platforms. If my image has been digitally altered, how will I get justice if the culprit is half a world away? AI companies fall short on transparency about how their systems function, so how can those same companies be trusted to be accountable for, and to audit, the systems that cause harm?

The truth is that these companies cannot be trusted. This is why, globally, regulation needs to move from “remove harm when found” to “prove that your system prevents harm”. We must codify prevention into the process by requiring mandatory input filtering, independent audits, and licensing terms that make prevention a legal and technical requirement. This would catch harm before it happens, enabling regulators to curb harmful behaviour by these AI companies before they deploy their products. This is the kind of work we are leading through our research at the AI Accountability Lab at Trinity College Dublin.

Regulation after the fact is better than nothing. But it offers little to victims who have already been harmed, and it ignores the glaring absence of enforcement against these platform harms.
