Experiments involving thousands of voters have shown that engaging with an AI chatbot can influence people's political opinions, and that chatbots can have a greater impact on those opinions than conventional campaigns and advertising.
A study published in Nature found that citizens' preferences changed by up to 15% after conversing with a chatbot. In a related paper, researchers showed that the effectiveness of these chatbots stems from their ability to synthesize large amounts of information in a conversational manner.
The findings demonstrate the persuasive power of chatbots, which are used by more than 100 million people every day, says David Rand, an author of both studies and a cognitive-science specialist at Cornell University in Ithaca, New York.
Both studies found that chatbots influence voters' opinions not through emotional appeals or stories, but by bombarding the user with information. The more information the chatbots provided, the more persuasive they became, but also the more prone they were to producing false statements, the authors found.
This can turn AI into a "very dangerous thing," believes Lisa Argyle, a computational social sciences specialist at Purdue University in West Lafayette, Indiana. "Instead of people becoming more informed, people become more misinformed," she explains.
Chatbots are more effective than campaign clips
The rapid adoption of chatbots, after they began to be widely used in 2023, has raised concerns about their potential to manipulate public opinion.
To understand how convincing artificial intelligence can be about political beliefs, researchers asked nearly 6,000 participants from three countries (Canada, Poland, and the United States) to rate their preferences for candidates in recent leadership elections in their countries on a scale of 0 to 100.
The researchers then randomly assigned participants to have a discussion with a chatbot designed to support a specific politician. After this dialogue, participants re-evaluated their opinion about the respective candidate.
More than 2,300 U.S. participants completed this experiment before the 2024 election between Donald Trump and Kamala Harris. When the candidate the chatbot was designed to support differed from the participant's initial preference, evaluations shifted in favor of that candidate by 2 to 4% after the conversation.
Previous research has found that people's opinions usually change by less than 1% after watching conventional political ads.
This effect was much more pronounced for participants from Canada and Poland, who completed the experiment before elections held in their countries earlier this year: their preferences for candidates changed by an average of about 10% after talking to the chatbot.
A chatbot presenting "evidence" is more convincing
Rand says he was "completely amazed" by the extent of this effect. He adds that the influence of chatbots may have been weaker in the United States due to the polarized political environment, where people already have strong beliefs about candidates.
In all countries, chatbots that focused on candidates' policies were more convincing than those that focused on their personalities. Participants seemed to be influenced most when the chatbot presented "evidence": when the chatbot used with Polish participants was configured not to provide such information, its persuasive power dropped by 78%.
In all three countries, artificial intelligence models advocating for right-wing candidates consistently provided more inaccurate statements than those supporting left-wing candidates.
Rand says this finding makes sense, as the "model absorbs information from the Internet and uses it as a source for its claims," and previous research suggests that "right-wing social media users share more inaccurate information than left-wing users."
True or false, the same outcome
Another set of experiments involving nearly 77,000 people from the UK found that participants were equally influenced by true and false information presented by chatbots.
Instead of focusing on specific candidates, this study asked participants about more than 700 political issues, including abortion and immigration. After discussing a particular issue with a chatbot, participants' evaluations shifted on average by 5 to 10% toward the position advocated by the AI, compared with a negligible shift in a control group that did not converse with either AI or humans.
Researchers found that more than a third of these opinion changes persisted when participants were contacted again, over a month later.
These experiments also compared the influence of different conversation styles and AI models, from small open-source ones to more powerful systems such as ChatGPT.
Although the exact model used did not significantly affect persuasiveness, the researchers found that a back-and-forth dialogue between human and AI was essential for the chatbot to influence political beliefs: when the same content was delivered to participants as a static, one-way message, its persuasive power was halved.
The chatbot does what it's taught
While some researchers feared AI's ability to tailor messages based on users' precise demographic data, experiments showed that when chatbots attempted to personalize arguments, this had a limited effect on people's opinions.
Instead, the authors conclude that policymakers and AI developers should be more concerned about how models are trained and how users use them, factors that could lead chatbots to sacrifice truth for political influence.
"Chatbots will do whatever the designer tells them to do. You can't assume they all have the same beneficial instructions; you always have to think about the creators' motivations and what kind of agenda they've been told to present," Rand concluded.
T.D.
