Artificial intelligence was supposed to upend this election, with convincing fake news content flummoxing and misleading voters as they cast their all-important ballots. It still might. But with a little less than a month before Election Day, it’s hurricane news that most acutely highlights the information crisis the world is facing.

An AI-generated image of a little girl fleeing Hurricane Helene while clutching a puppy went viral in right-wing circles last week, with Sen. Mike Lee (R-Utah) posting it on (and later deleting it from) X. But in a sea of defensive statements and apologies, one comment stood out above all others in revealing the inconvenient truth at the heart of digital “misinformation.”

“Y’all, I don’t know where this photo came from and honestly, it doesn’t matter,” wrote Amy Kremer, Republican National Committeewoman for the Georgia GOP and co-founder of Women for Trump. “I’m leaving it because it is emblematic of the trauma and pain people are living through right now.”

In other words: What matters isn’t whether something is true, just whether it feels true.

This is not just an AI problem. JD Vance recently doubled down on false, outlandish immigrant-baiting stories by saying it was necessary “to create stories so that … media actually pays attention to the suffering of the American people.”

For anyone worried about the rise of online misinformation, it’s a sobering thought: After the millions of dollars and hours spent researching, classifying, quarantining, and attempting to inoculate people against false information online, it turns out that a huge swath of the population might just want it anyway.
In a polemic raging against the “Era of AI-Generated Slop,” 404 Media co-founder Jason Koebler declared “we live in an era where the truth essentially does not matter, at least in terms of social media virality, and where the truth is often an actual hindrance in conveying whatever might be best for your side politically.”

The overwhelming spread of fake, potentially life-endangering content online in the wake of Helene, with another catastrophic hurricane on the horizon as soon as tonight, illustrates his point.

False information has always flourished in the aftermath of natural disasters. Hurricane Katrina was rife with examples, well before the social-media era. But never has it enjoyed its current reach, scale or speed. One X post claiming that “FEMA is actively hindering relief efforts” has more than 16 million views as of this writing. Rep. Marjorie Taylor Greene (R-Ga.) made headlines for claiming that an ambiguous “they” were controlling the weather.

X’s rules and policies state “You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm,” but the rule’s application appears to be inconsistent at best: The New York Times reported earlier this year that its “community notes” feature, which has largely taken the place of active content moderation, has been woefully insufficient.

POLITICO reported Tuesday that Federal Emergency Management Agency Administrator Deanne Criswell said the conspiracy swirl around Hurricane Helene has been “the worst [she has] ever seen,” and in an accompanying story, the blame for much of its spread was placed squarely on one person: Elon Musk, owner and poster-in-chief of X.
Musk’s feed has been a nonstop parade of hallucinatory conspiracy about the hurricanes — signal-boosting notorious huckster Mario Nawfal’s garbled take on FEMA funding; spreading rumors that FEMA “actively blocked” donations for hurricane relief; writing that “FEMA used up its budget ferrying illegals into the country instead of saving American lives. Treason.”

For natural disasters, X has become an institutional information hazard. Its current version of the “blue checkmark” credentialing system — which once required the company to verify posters’ identity, and is now available to anyone willing to pay $7 a month — offers no differentiation between trusted government sources and accounts telling people directly to ignore FEMA orders, or claiming that the United States Marines are engaged in open combat with FEMA representatives. (Many emergency offices across the country use the platform for official automated alerts, muddying the waters even further.)

Despite posting dozens of times in the past 24 hours on X, Musk has remained silent about the rampant spread of fake information, AI-generated or otherwise, on his platform. He did, however, post early Tuesday morning in response to a complaint about the presidential election: “If all [people] read is the legacy media propaganda, they have no idea what’s actually going on. Very important to send them links to X posts, so they can learn the truth.”

“After the 2016 elections, when the platforms got serious about addressing foreign election interference, we used to red team all kinds of wild scenarios,” said Nu Wexler, who was Twitter’s global communications director from 2013 to 2017 and after that served in similar roles with Facebook and Google.
“But we never considered one where the owner of a platform would use their personal account to push objectively false information, to help a candidate they’re actively supporting.”

Musk’s pride in the information environment he’s created reflects the fundamentally different vision for the internet that he holds — even in comparison to other leading social media companies like Meta or Snapchat, which at least try to police or moderate misleading content. Musk’s version of this is the “community notes” feature, which, while lacking teeth, reflects his commitment to the libertarian, anything-goes ethos of the early internet.

But this isn’t the early internet, when access to instant global self-publishing was limited to (and read only by) a small community of devoted nerds. As Musk likes to crow, X is the “global town square” where many people look first for news or chatter about their communities. As foreshadowed by the cesspool the platform became following this year’s first assassination attempt on former President Donald Trump, when something important happens, the wheels tend to come off quickly.

Which returns those who care about this state of affairs to the attitude of Kremer, Vance, and legions of lesser posters across the political spectrum, who flock to produce and consume content they know is false in order to access the catharsis and fellow-feeling they crave.

Platforms can spend fortunes to filter, limit, or remove fake content. Legislators can try to contain it with laws, even if they fail to pass them. But they can’t change what people actually crave, and what they want to share, even when confronted with the truth — something that Musk, as both the world’s leading salesman and avid consumer of life-threatening, inflammatory slop, understands all too well.