Political misinformation is a hard problem. False statements pervade contemporary politics, sowing division and distrust, and making it harder for society to operate on the basis of fact and law.
Even in matters of health and medicine, where people would seem to have a strong self-interest in knowing the facts, problems such as vaccine misinformation abound. So, what can be done to battle the false stories floating all around us?
Misinformation is durable and highly resistant to small-bore solutions, and it will take a consistent, multifaceted effort to make progress on this front, political scientist Adam Berinsky told an MIT audience on Monday.
“Rumors are sticky,” said Berinsky, who was one of the first scholars to run rigorous experiments about misinformation, starting about 15 years ago.
In his lecture, Berinsky outlined what we know about political rumors, and how we might tackle the problem. Significant pluralities of the U.S. public believe many false statements; another chunk of the public remains noncommittal about them. As Berinsky’s research has helped establish, corrections by neutral fact-checkers have little impact on the phenomenon. A better debunking tactic involves having political leaders issue corrections in circumstances when it does not benefit their self-interest — but such interventions can be hard to create, and setting the record straight may only have short-lived effects.
“It is really hard to correct misinformation,” Berinsky said. “But it doesn’t mean that we should give up.” Instead, he added, experts and leaders need to be “thinking collaboratively about how we can bring all [our] different research together, to come up with solutions to a problem that’s not going to go away.”
Berinsky delivered his lecture, “Why We Accept Misinformation and How to Fight It,” to an audience of about 175 people in Huntington Hall, Room 10-250, on the MIT campus. His talk was followed by moderated small-group discussions. Chancellor Melissa Nobles gave introductory remarks at the outset of the event.
“Misinformation, as we know, encourages polarized thinking,” Nobles said. “It divides us. It creates conditions where hate can grow and fester. And when we can’t agree on what’s true and what isn’t, it’s hard for us to talk with one another. It’s hard to evaluate complex issues. It is hard for us to use our problem-solving skills and our humanity to arrive at common ground.”
Berinsky is the Mitsui Professor of Problems in Contemporary Technology in MIT’s Department of Political Science and the director of the Political Experiments Research Lab. He has co-authored dozens of research papers and is the author of the book “Political Rumors,” published by Princeton University Press in 2023. He has been on the MIT faculty since 2003 and has won a Guggenheim Fellowship, among other honors.
One key facet of the battle against misinformation, Berinsky observed, is that qualified experts are often held in lower esteem now than in the past, a trend that cuts across U.S. political parties. Additionally, while false political rumors are an age-old phenomenon, it seems clear that social media has exacerbated the problem.
“I’m not a huge fan of arguing that technology changes everything,” Berinsky said. “But certainly, technology changes certain things. One of these is the way that these rumors spread.”
Most U.S. citizens do not follow politics closely on an ongoing basis, which may also help false rumors gain traction. Political misinformation often exploits, and then reinforces, partisan divides.
“Most of the people, most of the time, don’t pay attention to politics, but they do pick up on what’s going on in their social circles and what’s going on around them,” Berinsky said. “So, if you’re in an environment where you’re hearing these kinds of stories, this is potentially very problematic.”
As a central case study in his talk, Berinsky outlined the false assertion that the Obama administration’s Affordable Care Act contained “death panels” to decide if the elderly might no longer be granted care. That was never the case. The legislation did have a provision making counseling available about late-in-life care options.
Studying this controversy helped Berinsky observe the partisan split often present in political rumors, as well as the potential usefulness of different remedies. Interventions by neutral fact-checkers did little to change beliefs on the subject, but corrections by Republicans (a GOP senator had co-authored the late-in-life counseling provision) helped induce a 10-percentage-point increase in those accepting the actual facts. In general, people who change their beliefs mostly come from the pool of uncommitted observers; those who believe false claims tend not to move off those positions.
“The people who say they believe these things really believe them,” Berinsky said. “And so that’s a problem for democracy.”
Partisanship, as well as a tendency to believe in conspiracies, helps shape whether particular individuals believe false claims, including the incorrect assertion that President Obama was not a U.S. citizen, circulated largely by Republicans; the notion that the terrorist attacks of Sept. 11, 2001, were some sort of “inside job,” which Democrats have been more likely to believe; and false claims by many about the validity of the 2020 election.
In each separate case, a slightly different approach to countering misinformation might be most effective. For starters, that means carefully considering which kinds of political leaders might, in theory, best correct the record — although even so, there are no guarantees that a given politician will persuade even their own partisans.
“Given the world we live in, we really need to think about who delivers the message and how can we have an effective message delivery system,” Berinsky said.
More broadly, he added, “We know now that there’s not going to be one magic bullet that will solve everything.”
That means thinking about what combination of messaging, messengers, and tools might make a dent in political falsehoods. Even if the result just changes public opinions by a few percentage points, that kind of change could well be worth pursuing.
After his talk, Berinsky fielded questions about the possibility of restoring trust in experts, and the deep historical roots of some rumors; he also received a query about the way AI might exacerbate the spread of misinformation. While Berinsky said he takes that issue seriously, he has also been involved in a research project, along with Professor David Rand of the MIT Sloan School of Management, to evaluate whether AI tools can actually help counter misinformation.
On the AI front, “It’s not all bad, but we want to be very cautious in thinking about how we deploy this,” Berinsky said.
Overall, Berinsky emphasized, that project is an example of pursuing a whole suite of solutions to fight back against misinformation. Rather than feel dejected about the lack of a simple fix, he observed, people should explore “the possibility of having different kinds of solutions that might come together.”
The event was sponsored by the offices of the Provost and Chancellor, and the Institute Community and Equity Office.