AI is already making online fraud easier. It could be worse.


Cherepanov and Strysk were confident that their discovery, which they dubbed PromptLock, marked a turning point in generative AI, showing how the technology could be used to create highly flexible malware attacks. When they published a blog post announcing that they had found the first example of AI-powered ransomware, it quickly became the focus of global media attention.

But the danger was not as dramatic as it first appeared. The day after the blog post went live, a team of researchers from New York University claimed responsibility, explaining that the malware was not, in fact, a full-fledged attack released into the wild, but rather a research project designed only to prove that it was possible to automate every step of a ransomware campaign. And that, they said, they had done.

PromptLock may have turned out to be an academic project, but real criminals are using the latest AI tools. Just as software engineers are using artificial intelligence to help write code and investigate bugs, hackers are using these tools to reduce the time and effort required to mount an attack, lowering the barrier for less experienced attackers to try their hand.

The prospect that cyberattacks will now become more common and more effective over time is not a distant possibility but "a sheer reality," says Lorenzo Cavallaro, professor of computer science at University College London.

Some in Silicon Valley have warned that AI is on the verge of being able to carry out fully automated attacks. But most security researchers say this claim is exaggerated. "For some reason, everyone is focusing on this idea of, like, AI superhackers, which is absolutely absurd," says Marcus Hutchins, principal threat researcher at the security company Expel, famous in the security world for bringing down a massive global ransomware attack called WannaCry in 2017.

Instead, experts argue, we should pay more attention to the immediate risks posed by AI, which is already accelerating and expanding the volume of scams. Criminals are increasingly using the latest deepfake technologies to impersonate people and defraud victims of large sums of money. These AI-enhanced cyberattacks are only going to become more destructive, and we need to be prepared.

Spam and beyond

Attackers began adopting generative AI tools soon after the ChatGPT explosion in late 2022. These efforts, as you might imagine, started with the creation of spam, and a lot of it. In a report last year, Microsoft said that in the year leading up to April 2025, the company had blocked $4 billion worth of scams and fraudulent transactions "likely aided by AI content."

According to estimates by researchers at Columbia University, the University of Chicago, and Barracuda Networks, who analyzed approximately 500,000 malicious messages collected before and after the launch of ChatGPT, at least half of spam emails are now generated using LLMs. They also found evidence that AI is increasingly being deployed in more sophisticated schemes, examining targeted email attacks that impersonate a trusted person to trick a worker within an organization out of money or sensitive information. By April 2025, they found, at least 14% of such focused email attacks were generated using LLMs, up from 7.6% in April 2024.
