Millions of people around the world are creating and sharing deepfake nudes on the secure messaging app Telegram, Guardian analysis has revealed, as the proliferation of advanced AI tools industrialises the online abuse of women.
The Guardian has identified at least 150 Telegram channels – large encrypted group chats popular for their secure communications – that have users in countries ranging from the UK to Brazil, China to Nigeria, Russia to India. Some of them offer “nude” photos or videos for a fee: users can upload a photo of any woman, and AI will generate a video of that woman performing a sexual act. Many others offer a feed of images of celebrities, social media influencers and ordinary women – created by AI to pose nude or perform sexual acts. Followers are also using the channels to share tips on available deepfake tools.
While Telegram channels dedicated to distributing non-consensual nude images of women have existed for years, the widespread availability of AI tools means that anyone can instantly become the subject of graphic sexual material viewed by millions of people.
On a Russian-language Telegram channel advertising deepfake “blogger leaks” and “celebrity leaks”, a post about an AI nudification Telegram bot promised “a neural network that doesn’t know the word ‘no’”.
It said, “Choose the position, size and location. Do everything with it that you can’t do in real life.”
On a Chinese-language Telegram channel with about 25,000 subscribers, men shared videos of their “first love” or their “girlfriend’s best friend” that had been digitally undressed using AI.
A network of Telegram channels targeted at Nigerian users spread hundreds of stolen nudes and intimate images as well as deepfakes.
Telegram is a secure messaging app that allows users to create groups or channels to broadcast content to an unlimited number of contacts. Under the app’s terms of service, users cannot post “illegal pornographic content” on “publicly viewable” channels and bots, or “engage in activities that are considered illegal in most countries”.
Independent analysis and a review of data from Telemetr.io, a database service that indexes such channels, indicate that Telegram has shut down several nudification channels.
Telegram told the Guardian that deepfake pornography and the tools used to create it are explicitly prohibited by its terms of service, adding that “such content is routinely removed whenever discovered. Moderators empowered with custom AI tools actively monitor public parts of the platform and accept reports to remove content that violates our terms of service, including those that encourage the creation of deepfake pornography.”
Telegram said in its statement that it removed more than 952,000 pieces of objectionable content in 2025.
In recent weeks, Elon Musk’s social media platform X has drawn outrage after its AI chatbot, Grok, was used to create images of women wearing bikinis or minimal clothing without their consent.
The backlash led Musk’s artificial intelligence company, xAI, to announce that it would stop allowing Grok to edit photos of real people into bikinis. The UK media regulator, Ofcom, also announced an investigation into X.
But there is a trove of forums, websites and apps, including Telegram, that provide millions of people easy access to graphic, non-consensual content – and allow this content to be generated and shared on demand, without the knowledge of the women who are being violated. A report released on Tuesday by the Tech Transparency Project found dozens of nudification apps available in the Google Play Store and Apple App Store, which collectively have had 705m downloads.
An Apple spokesperson said the company had removed 28 of the 47 nudification apps identified by the Tech Transparency Project in its investigation, while a Google spokesperson said “the majority of apps” on its service had been suspended pending an investigation.
Anne Crannan, a researcher focused on gender-based violence at the London-based Institute for Strategic Dialogue, said Telegram channels are a mainstay of a broader internet ecosystem dedicated to creating and circulating non-consensual intimate images.
They allow users to escape the oversight of large platforms such as Google, and to share tips on how to bypass safeguards that prevent AI models from generating this content. But “the dissemination and celebration of this content is another part”, she said. “Sharing it with other men and bragging about it, and that celebration aspect is also really important. It really shows the misogynistic tone of it. They’re trying to punish women or silence women.”
Last year, Meta shut down an Italian Facebook group in which men shared intimate photos of their partners and other women. Before it was deleted, the group, Mia Moglie (meaning “my wife”), had about 32,000 members.
However, the investigative outlet Indicator found that Meta has failed to stop the flow of ads for AI nudification tools on its platforms, identifying at least 4,431 nudifier ads since December 4 last year, although some appear to be scams. A Meta spokesperson said the company removes ads that violate its policies.
AI tools have accelerated the global increase in online violence against women, allowing almost anyone to create and share humiliating images. In many jurisdictions, including much of the Global South, few legal avenues exist to hold perpetrators accountable. As of 2024, fewer than 40% of countries had laws protecting women and girls from cyberharassment or cyberstalking, according to World Bank data. The UN estimates that 1.8 billion women and girls still lack legal protection from online harassment and other forms of technology-facilitated abuse.
Campaigners say a lack of regulation is just one reason why women and girls in low-income countries are particularly vulnerable; poor digital literacy and poverty can also increase the risk. Ugochi Ihe, a partner at TechHer, a Nigeria-based organisation that encourages women and girls to learn and work with technology, says she has come across cases in which women who borrowed money from loan apps became victims of blackmail by “unscrupulous men who use AI” and who are becoming more creative with their abuse every day.
The real-life consequences of digital abuse are devastating, including mental health difficulties, isolation and loss of work.
“These things are bound to destroy a young girl’s life,” said Mercy Mutemi, a Kenya-based lawyer who represents four victims of deepfake abuse. Some of her clients have been denied jobs and faced disciplinary hearings at school, she said, all because of deepfake images circulated without their consent.
Ihe said her organisation had handled complaints from women who were ostracised by their families after being threatened with nude and intimate images obtained from Telegram channels.
“Once it’s out, it’s not possible to recover your dignity, your identity. Even if the perpetrator comes forward to say, ‘Oh, it was a deepfake,’ you can’t tell how many people have seen it. The damage to reputation is irreparable.”
