Scientists surprised themselves when they found they could instruct a version of ChatGPT to gently talk people out of their beliefs in conspiracy theories, such as notions that Covid was an attempt at population control or that 9/11 was an inside job.
The most important revelation wasn’t about the power of AI, but about the workings of the human mind. The experiment punctured the popular myth that we’re in a post-truth era where evidence no longer matters, and it flew in the face of a prevailing view in psychology that people cling to conspiracy theories for emotional reasons, and that no amount of evidence can ever disabuse them of those beliefs.
“It’s really the most uplifting research I’ve ever done,” said psychologist Gordon Pennycook of Cornell University, one of the authors of the study. Study subjects were surprisingly amenable to evidence when it was presented the right way.
The researchers asked more than 2,000 volunteers to interact with a chatbot, GPT-4 Turbo, a large language model (LLM), about beliefs that may be considered conspiracy theories.
The subjects typed their belief into a box, and the LLM decided whether it fit the researchers’ definition of a conspiracy theory. It then asked participants to rate how sure they were of their belief on a scale of 0% to 100%, and asked the volunteers for their evidence.
The researchers had instructed the LLM to try to persuade people to reconsider their beliefs. To their surprise, it was pretty effective: people’s faith in false conspiracy theories dropped by 20%, on average.
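For readers curious about the mechanics, the loop the article describes (collect a claim, a confidence rating and supporting evidence, then let the model argue back over several turns) can be sketched in a few lines of Python. This is a hypothetical reconstruction using the public OpenAI API; the system prompt, the model name and the run_dialogue helper are illustrative assumptions, not the researchers’ actual materials.

```python
# A minimal sketch of the study's interaction loop, assuming the OpenAI
# chat-completions API. The prompt wording and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "The user believes the following claim with {confidence}% confidence: "
    "'{claim}'. Their stated evidence: '{evidence}'. "
    "Using facts and logic, gently persuade them to reconsider the claim."
)

def run_dialogue(claim: str, confidence: int, evidence: str, turns: int = 3):
    """Run a short persuasion dialogue about one stated belief."""
    messages = [{
        "role": "system",
        "content": SYSTEM_PROMPT.format(
            claim=claim, confidence=confidence, evidence=evidence
        ),
    }]
    for turn in range(turns):
        # The model opens each round with counter-evidence and rebuttals.
        reply = client.chat.completions.create(
            model="gpt-4-turbo",  # the model named in the article
            messages=messages,
        ).choices[0].message.content
        print(f"\nAI: {reply}\n")
        messages.append({"role": "assistant", "content": reply})
        if turn < turns - 1:
            # Let the believer push back; the model sees it next round.
            messages.append({"role": "user", "content": input("You: ")})

run_dialogue(
    claim="The moon landing was staged.",
    confidence=80,
    evidence="The flag appears to wave in a vacuum.",
)
```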
About a quarter of the volunteers dropped their confidence in the belief from above 50% to below it. “I really didn’t think it was going to work, because I really bought into the idea that, once you’re down the rabbit hole, there’s no getting out,” said Pennycook.
The LLM had some advantages over a human interlocutor. People who have strong beliefs in conspiracy theories tend to gather mountains of evidence—not quality evidence, but quantity. It’s hard for most non-believers to muster the motivation to do the tiresome work of keeping up.
But AI can match believers with instant mountains of counter-evidence and can point out logical flaws in believers’ claims. It can react in real time to counterpoints the user might bring up.
Elizabeth Loftus, a psychologist at the University of California, Irvine, has been studying the power of AI to sow misinformation and even false memories. She was impressed with this study and the magnitude of the results.
She suggested one reason it worked so well: the chatbot showed subjects what they did not know, reducing their overconfidence in their own knowledge. People who believe in conspiracy theories typically have a high regard for their own intelligence and a lower regard for others’ judgment.
After the experiment, some of the volunteers said it was the first time anyone or anything had really understood their beliefs and offered effective counter-evidence.
Before the findings were published in Science, the researchers made their version of the chatbot available to journalists to try out. I prompted it with beliefs I’ve heard from friends: that the US was covering up the existence of alien life and that after the assassination attempt against Donald Trump, the mainstream media deliberately avoided saying he had been shot because reporters worried it would help his campaign.
And then I asked the LLM if immigrants in Springfield, Ohio, were eating cats and dogs. When I posed the UFO claim, I cited military pilot sightings and a National Geographic special as evidence, and the chatbot pointed out some alternative explanations and showed why those were more probable than alien craft.
It discussed the physical difficulty of traversing the vast distances needed to reach Earth and asked whether it’s likely that aliens could be advanced enough to manage the journey yet clumsy enough to be discovered.
On the question of journalists hiding Trump’s shooting, the bot explained that making guesses and stating them as facts is antithetical to a reporter’s job. If there’s a series of pops in a crowd, and it’s not yet clear what’s happening, that’s what they’re obligated to report: a series of pops. As for the Ohio pet-eating claim, the AI did a nice job of explaining that even if there were a single case of someone eating a pet, it wouldn’t demonstrate a pattern.
That’s not to say that lies, rumours and deception aren’t important tactics used by humans to gain popularity and political advantage. Searching through social media after the presidential debate between Donald Trump and Kamala Harris, I saw that many people believed the cat-eating rumour, and what they posted as evidence amounted to repetitions of it. To gossip is human. But now we know they might be dissuaded with logic and evidence. ©Bloomberg