The study found that deepfake fraud is happening on an industrial scale.

An analysis published by AI experts says that deepfake fraud has become “industrialized”.

Tools to create customized, even personalized scams – for example, deepfake videos impersonating Swedish journalists or Cypriot officials – are no longer specialized, but are cheap and easy to deploy at scale, according to the analysis from the AI Incident Database.

It lists more than a dozen recent examples of “impersonation for profit”, including a deepfake video of Western Australia premier Roger Cook promoting an investment scheme and deepfake videos of doctors promoting skin cream.

These examples are part of a trend in which scammers use widely available AI tools to perpetrate increasingly targeted heists. Last year, a finance executive at a Singapore multinational paid out nearly $500,000 to scammers after what he believed was a video call with company leadership. UK consumers are estimated to have lost £9.4 billion to fraud in the nine months to November 2025.

“Capabilities have suddenly reached a level where fake content can be generated by almost anyone,” said Simon Mylius, an MIT researcher who works on a project involving the AI Incident Database.

He calculated that “fraud, scams and targeted manipulation” accounted for the largest share of incidents reported in the database in 11 of the last 12 months. “It has become so accessible that there is really no barrier to entry,” he said.

“The scale is changing,” said Fred Heiding, a Harvard researcher who studies AI-powered scams. “It’s becoming so cheap that almost anyone can use it now. The models are getting really good – they’re improving faster than most experts thought.”

In early January, Jason Rebholz, chief executive of AI security company Evoke, posted a job opening on LinkedIn and was immediately contacted by a stranger in his network who recommended a candidate.

Within days, he was exchanging emails with someone who, on paper, appeared to be a talented engineer.

“I looked at the resume and I thought, this is a really great resume. And so I thought, even though there were some red flags, let me just have the conversation.”

Then things got weird. The candidate’s emails went straight to spam. There were oddities in his resume. But Rebholz had dealt with unusual candidates before and decided to proceed with the interviews.

Then, when Rebholz joined the call, it took about a minute for the candidate’s video to appear.

“The background was extremely fake,” he said. “It just looked super, super fake. And it was really struggling to deal with the edges of the person. Like part of his body was coming in and out… and then when I’m looking at his face, it’s very soft around the edges.”

Rebholz continued the conversation, not wanting to face the awkwardness of directly asking the candidate whether it was all an elaborate scam. Later, he sent the recording to a contact at a deepfake detection firm, who told him that the video image of the candidate was AI-generated. He rejected the candidate.

Rebholz still doesn’t know what the scammer wanted — an engineering salary, or trade secrets. While there are reports of North Korean hackers trying to get jobs at Amazon, Evoke is a startup, not a big player.

“It’s like, if we’re being targeted by this, then everyone is being targeted by this,” Rebholz said.

Heiding said the worst is yet to come. Deepfake voice cloning technology is already excellent – making it easy for scammers to impersonate, say, a grandchild in distress over the phone. Deepfake video, on the other hand, still has room for improvement.

This could have huge consequences: for hiring, for elections, and for wider society. Heiding added: “That will be the big pain point here, the complete lack of trust in digital institutions and content in general.”
