Study finds chatbots can influence political opinions but are 'largely' inaccurate

Chatbots can influence people’s political opinions, but the most persuasive artificial intelligence models provide “substantial” amounts of misinformation in the process, according to the UK government’s AI safety body.

The researchers said the study was the largest and most systematic investigation of AI persuasiveness to date, with nearly 80,000 British participants interacting with 19 different AI models.

The AI Security Institute conducted the study amid fears that chatbots could be deployed for illegal activities including fraud and grooming.

On topics including public sector pay and strikes, and the cost of living crisis and inflation, participants interacted with a model – the underlying technology behind AI tools such as chatbots – that had been prompted to persuade users to take a certain stance on an issue.

The advanced models behind ChatGPT and Elon Musk’s Grok were among the models used in the study, which was also authored by academics from the London School of Economics, the Massachusetts Institute of Technology, the University of Oxford and Stanford University.

Before and after the chat, users reported whether they agreed with a series of statements expressing a particular political opinion.

The study, published in the journal Science on Thursday, found that "information-dense" AI responses were the most persuasive. Instructing a model to focus on using facts and evidence yielded the greatest persuasion gains; however, the models that used the most facts and evidence were also less accurate than others.

“These results suggest that optimizing persuasion may come at some cost to veracity, a dynamic that could have deleterious consequences for public discourse and the information ecosystem,” the study said.

On average, the AI and human participants exchanged about seven messages in a conversation lasting around 10 minutes.

It said that making changes to a model after the initial phase of development, known as post-training, was a key factor in making it more persuasive. The study made the models, which included freely available "open source" models such as Meta's Llama 3 and Chinese company Alibaba's Qwen, more persuasive by combining them with "reward models" that favoured the most persuasive outputs.

The researchers said an AI system’s ability to churn through information could make it more manipulative than even the most persuasive human.

"Insofar as information density is a key driver of persuasive success, this implies that, given their unique ability to generate large amounts of information almost instantaneously during an interaction, AI systems may even surpass the persuasiveness of typical human persuaders," the report said.

The study said that giving the models personal information about the users they were interacting with did not have as big an impact as post-training or increasing information density.

Kobi Hackenburg, an AISI research scientist and one of the report's authors, said: "What we found is that getting the models to use more information was more effective than all these psychologically more sophisticated persuasion techniques."

However, the study said there were some practical barriers to AI manipulating people's opinions, such as the time a user would have to spend in a lengthy conversation with a chatbot about politics. The researchers added that some theories suggest there are hard psychological limits to human persuasion.

Hackenburg said it was important to consider whether a chatbot could have the same persuasive impact in the real world, where "there are so many competing demands for people's attention and people are not incentivized to sit down for a 10-minute conversation with a chatbot or AI system".
