A high-profile group of experts on AI and online misinformation have warned that political leaders could soon launch swarms of human-mimicking AI agents to shape public opinion in a way that risks undermining democracy.
Nobel Peace Prize-winning free-speech activist Maria Ressa and leading AI and social science researchers from Berkeley, Harvard, Oxford, Cambridge, and Yale have joined a global consortium flagging the new “disruptive threat” posed by malicious “AI swarms” infecting social media and messaging channels.
Amid predictions that the technology could be deployed on a large scale by the time of the 2028 US presidential election, the authors warned that a future dictator could use such swarms to persuade a population to accept canceled elections or overturned results.
The warnings, published today in Science, come with a call for coordinated global action to combat the risk, including “swarm scanners” to counter AI-powered misinformation campaigns and the watermarking of content. Early versions of AI-powered influence operations were used in the 2024 elections in Taiwan, India, and Indonesia.
“A disruptive threat is emerging: swarms of collaborative, malicious AI agents,” the authors said. “These systems are able to coordinate autonomously, infiltrate communities, and efficiently build consensus. By adaptively mimicking human social dynamics, they threaten democracy.”
Inga Trauthigh, a leading expert on campaign technology, said politicians’ reluctance to hand over control of campaigns to AI was likely to slow the adoption of such advanced technology. Another reason for skepticism is the concern that using such potentially illegal techniques would not be worth the risk, as voters are still more influenced by offline content.
Experts behind the warning include Gary Marcus of New York University, a leading skeptic of the claimed capabilities of existing AI models who calls himself a “generative AI realist”, and Audrey Tang, Taiwan’s first digital minister, who warned: “People in the pay of authoritarian forces are undermining electoral processes, weaponizing AI and using our social powers against us.”
Others include David Garcia, professor of social and behavioral data science at the University of Konstanz, Sander van der Linden, misinformation expert and director of the University of Cambridge’s Social Decision-Making Laboratory, and Christopher Summerfield, AI researcher and professor of cognitive neuroscience at the University of Oxford.
Together they say political leaders could deploy an almost unlimited number of AIs masquerading as humans online, infiltrating communities, learning their weaknesses over time, and using increasingly convincing, carefully crafted lies to change population-wide opinion.
This threat is being exacerbated by advances in AI’s ability to understand the tone and content of discourse. AI systems are increasingly able to mimic human dynamics, for example by using appropriate slang and posting irregularly to avoid detection. Progress in the development of “agentic” AI also means these systems can autonomously plan and coordinate their actions.
As well as operating on social media, the agents can use messaging channels, write blogs, or send emails, depending on which channel best helps them achieve their goal, said one of the authors, Daniel Thilo Schroeder, a research scientist at the Sintef research institute in Oslo.
Schroeder, who is simulating the swarms in laboratory conditions, said, “It’s terrifying how easy these things are to code and just have little bot armies that can actually navigate online social media platforms and email and use these tools.”
Another of the authors, Jonas Kunst, professor of communications at BI Norwegian Business School, said: “If these bots start to evolve into a collective and exchange information to solve a problem – in this case a malicious goal, namely analyzing a community and finding a weak spot – then coordination will increase their accuracy and efficiency.
“This is a really serious threat that we anticipate is going to materialize.”
In Taiwan, where voters are regularly, and often unknowingly, targeted by Chinese propaganda, AI bots have been increasingly engaging with citizens on Threads and Facebook over the past two to three months, said Puma Shen, a Taiwanese Democratic Progressive Party lawmaker and campaigner against Chinese disinformation.
When discussing political topics, the AIs provide “a lot of information that you can’t verify,” creating “information overload,” Shen said. The bots could cite fake articles claiming the US would abandon Taiwan, he said. Another recent trend is AI bots stressing to young Taiwanese that the China-Taiwan dispute is very complex, “so don’t take sides if you don’t have any information”.
“It’s not telling you that China is great, but it’s [encouraging them to remain neutral],” Shen told the Guardian. “It’s very dangerous, because then you think people like me are radicals.”
Amid signs that progress in AI technology is not as fast as Silicon Valley companies such as OpenAI and Anthropic have claimed, the Guardian asked independent AI experts to assess the swarm warnings.
“There was potential for AI-powered microtargeting in the election-heavy year of 2024, but we didn’t see as much as scholars had predicted,” said Trauthigh, an adviser to the International Panel on the Information Environment. “Most of the political campaigners I interviewed are still using old technologies and are not at this cutting edge.”
“It’s not hypothetical,” said Michael Wooldridge, professor of the foundations of AI at the University of Oxford. “I think it is entirely plausible that bad actors would try to mobilize virtual armies of LLM-powered agents to disrupt elections and manipulate public opinion, for example by targeting large numbers of individuals on social media and other electronic media. This is entirely technologically possible … as the technology has become progressively better and more accessible.”
