APOCALYPSE WHEN — Artificial Intelligence will soon be, if it’s not already, better than humans at detecting disinformation, whether it’s about war, health, climate or elections. But AI will also be, if it’s not already, better than humans at creating disinformation. This quandary has some experts foreseeing an AI misinformation apocalypse, while others think the threat is overblown.

Darren Linvill, co-director of the Media Forensics Hub at Clemson University, thinks the future is somewhere in between. He thinks of it as a sped-up, more automated version of the information wars that have gone on for decades. “It’s still the same fundamental tension,” he said. “Bad guys are going to use computers to do their job … Good guys are going to use computers to try to counter the bad guys.”

That doesn’t mean that AI won’t make things worse, or that disinformation isn’t pernicious — or that we’ve figured out how to prevent people from falling down rabbit holes of disinformation (or help them climb back out if they’ve already fallen). But AI, in Linvill’s view, doesn’t make the struggle over what is true a brand-new challenge.

AI will mean a “leap” in what’s possible in disseminating disinformation. But Linvill said that doesn’t necessarily mean the distance between the bad guys and the good guys will become insurmountable. It’s sort of like a race where the cars are faster and more powerful than in the old days, but both sides have jazzy cars.

So far, the disinformation we’re already awash in isn’t primarily AI-generated. We already have viral memes, deepfakes and all sorts of harmful disinformation online. But AI certainly contributes to that: NewsGuard, which monitors disinformation, has identified hundreds of “unreliable” AI-generated news sites and many false narratives.

What Linvill is focused on in the dawning AI age is the pace of creating, spreading and — on the good-guy side — identifying disinformation. And that pace has changed a lot.
One classic disinformation episode he has studied, Operation INFEKTION, festered for years before it was uncovered. It began with a 1983 report in an obscure Indian publication called The Patriot, which claimed — falsely — that the virus that causes AIDS was a bioweapon created at Fort Detrick in Maryland. The Soviets’ KGB had created The Patriot some years earlier and waited for the right moment to use it. The lie spread from the obscure publication to the fringe to more credible sources — a process called “narrative laundering.” But the origins of INFEKTION were not uncovered for years, until archives were opened after the collapse of the USSR.

While AI can speed that process up, it’s not wholly AI-dependent. Disinformation can already fly, if not at the speed of light, then certainly at the speed of a post on X. “From the inception of social media, and attempts to moderate social media, the same fundamental tension existed that people used automated techniques to try to manipulate that system,” he said. The sites, to various extents, also relied on automation to control it. AI is “just the next step in that automation,” he said.

But right now, a scary image of a war zone online is more likely to be lifted from a video game than generated by AI, and the fake “eyewitness” account on YouTube of some scandal isn’t necessarily AI-produced either. AI may be part of it — but it’s not the full blossoming of the science fiction-ish AI nightmare scenarios that people worry about, he said.

The reason people believe these false narratives even after they are debunked, whether they were created with AI or with older methods, isn’t the technology. It’s because “within the communities that they were intended to bounce around, people wanted to believe them.”

He does think AI-generated disinformation will cause harm — but that harm won’t be equally distributed. His center has done research on older adults’ susceptibility to fraud and disinformation in the digital age.
“My kids — they’ll probably be fine, because they’re going to understand it and be able to navigate it,” he said. But his own generation — not so much. “We’re always creating new realities. It’s always going to be hard for somebody — in this case,” he said, “it’s probably going to be my generation.”

Welcome to POLITICO Nightly. Reach out with news, tips and ideas at nightly@politico.com. Or contact tonight’s author on X (formerly known as Twitter) at @JoanneKenen.