AI chatbots can influence voters better than political ads


“A conversation with an LLM has a fairly meaningful impact on major election choices,” says Gordon Pennycook, a psychologist at Cornell University who worked on the Nature study. He says LLMs can persuade people more effectively than political ads because they generate so much information in real time and deploy it strategically over the course of a conversation.

For the Nature paper, researchers recruited more than 2,300 participants to converse with a chatbot two months before the 2024 US presidential election. The chatbot, which was instructed to advocate for one of the top two candidates, was surprisingly persuasive, especially when discussing the candidates’ policy platforms on issues like the economy and health care. Donald Trump supporters who chatted with an AI model arguing for Kamala Harris became slightly more inclined to support Harris, moving 3.9 points toward her on a 100-point scale. That is almost four times the measured impact of political advertisements during the 2016 and 2020 elections. The AI model moved Harris supporters toward Trump by 2.3 points.

In similar experiments conducted in the lead-up to the 2025 Canadian federal election and the 2025 Polish presidential election, the team found an even larger effect: chatbots shifted the attitudes of opposition voters by about 10 points.

Long-standing theories of politically motivated reasoning hold that partisan voters are impervious to facts and evidence that contradict their beliefs. But the researchers found that the chatbots, which were built on several models including variants of GPT and DeepSeek, were more persuasive when instructed to use facts and evidence than when told not to. “People are making updates based on the facts and information that the model is providing them,” says Thomas Costello, a psychologist at American University who worked on the project.

The problem is that some of the “evidence” and “facts” presented by the chatbots were false. In all three countries, chatbots advocating for right-leaning candidates made more false claims than those advocating for left-leaning candidates. Costello says the underlying models are trained on large amounts of human-written text, meaning they reproduce real-world patterns, including “political communication that comes from the right, which tends to be less accurate,” according to studies of partisan social media posts.

In another study published this week, in Science, an overlapping team of researchers investigated what makes these chatbots so persuasive. They deployed 19 LLMs to converse with approximately 77,000 participants from across the UK on more than 700 political issues, varying factors such as computational power, training techniques, and rhetorical strategies.

The most effective way to make models persuasive was to instruct them to pack their arguments with facts and evidence, and then to give them additional training on examples of persuasive conversations. The most persuasive model moved participants who initially disagreed with a political statement 26.1 points toward agreement. “These are really big treatment effects,” says Kobi Hackenberg, a research scientist at the UK AI Safety Institute who worked on the project.
