Meta’s decision to close down its US fact-checking operations marks the end of an eight-year era bookended by Trump presidencies. And though it has caused a predictable wave of panic in liberal-leaning tech-accountability circles, the decision is also forcing a reconsideration among experts about what the real risks of online mis- and disinformation are — and how best to handle them.

Political disinformation seemed like a new and serious disease of the social media age when it first hit the public radar almost a decade ago, a problem that could be fixed by algorithm tweaks and more rigorous fact-checking. It has turned out to be slipperier, subtler and harder to root out.

“The situation is more complex than some are willing to let on,” said Felix Simon, communication researcher and research fellow in AI and Digital News at the Reuters Institute for the Study of Journalism. “While there is evidence that fact-checking works, the effects were often small and fleeting,” he said, “and least effective for those most likely to see and believe false or misleading information.”

Meta’s fact-checking operation has its roots in the panic over disinformation that exploded in the wake of the 2016 election. Much of the worry surrounded deliberate Russian efforts to monkeywrench Western politics by sowing discord online.

A vibrant cottage industry — dubbed “Big Disinfo” — sprang up to fight back. NGOs poured money into groups pledging to defend democracy against merchants of mistruth, while fact-checking operations promised to patrol the boundaries of reality. At first, it seemed like a largely technical exercise. The mission statement of the Stanford Internet Observatory, one of the newly created bodies, highlighted “the negative impact of technology” and promised to study “the abuse of the internet.”

Not everyone was convinced of the scale of the threat, however. Tech CEOs started out skeptical: In the days after the 2016 election, Facebook CEO Mark Zuckerberg said there was “a profound lack of empathy in asserting that the only reason someone could have voted the way they did is because they saw fake news.” Under pressure, after congressional hearings and exposés, his company agreed to new policies and outside oversight.

Eight years later, after Trump’s decisive second victory, Zuckerberg’s earlier view is newly resonant. In part, this is because researchers themselves say there isn’t convincing evidence that misinformation sways voters. Research has instead shown that consumers of misinformation tend to be those who are highly motivated and already conspiratorially inclined, with most of us surprisingly resilient to far-fetched and unfounded notions.

This, combined with the difficulty of connecting exposure to misinformation with subsequent political beliefs or behaviors, has prompted a “revisionist view” in the field that “maybe [misinfo] isn’t the biggest danger we’re facing,” Matthew Baum, professor of global communications at Harvard University, told POLITICO.

For a recent article, I spoke to Baum and a number of other researchers, and found them surprisingly open about the idea that disinformation is not the bugbear it seemed a decade ago. “I’ve been thinking about this a lot lately … about how the frame of disinformation has failed us and what we can do differently,” said Alice Marwick, director of research at Data & Society, a nonprofit research institute.
Reece Peck, associate professor of journalism and political communication at the City University of New York, said, “The current emphasis on algorithms and tech moguls like Zuckerberg and Musk often obscures a key reality: the most effective online political content draws heavily on narrative techniques and performative styles pioneered by Limbaugh, Fox, and Drudge.”

That’s not to say that experts would cheer the end of fact-checking, which Zuckerberg himself acknowledged “means we’re going to catch less bad stuff.” “While the Community Notes approach pioneered by X has shown promise, it is not a like-for-like replacement, nor is it intended to be,” said Rasmus Nielsen, professor at the University of Copenhagen’s Department of Communication.

So what’s next? Eight years into the grand experiment in fighting online misinformation, some figures suggest content moderation may have deepened polarization rather than built bridges. Trust in the media among Republicans hovered around 30% before 2016, then plummeted after Trump’s first victory; last year it stood at 12%. The Stanford Internet Observatory, which conducted high-profile work on election-related misinformation, closed after being targeted by lawsuits and subpoenas from congressional Republicans.

Marwick at Data & Society said researchers should move past the idea of countering binary “units of fact,” and instead look at how age-old false narratives – for example, about immigrant criminality – are cynically wielded.

Peck offered as an example the podcaster Joe Rogan, who dismissed the COVID vaccine and whose claims that U.S. intelligence agencies helped orchestrate the Jan. 6, 2021, attack on the U.S. Capitol were dubbed “dumb and irresponsible” on the Poynter site. Rogan’s endorsement of Trump this election cycle carried huge weight among his young male listenership.

But the idea that Rogan “is giving people bad science, and if we gave people good science, we could defeat him … that’s kind of misplacing where Joe Rogan gains his cultural authority — where the trust is between him and his audience,” Peck said.

Instead, he advocates going beyond the binary of true and false and taking a more humanistic view of why certain messages resonate. “Communication scholars must reject the notion that we can overcome the dynamics of ‘culture war’ politics through fact-checking or platform reforms,” said Peck.