When Barack Obama mocked Donald Trump’s “weird obsession with crowd sizes” (complete with a suggestive hand gesture) at the Democratic National Convention this week, the audience went wild, and the internet made the moment go viral. But on Trump’s end, something else was also going on when he made his (false) assertions that Kamala Harris used artificial intelligence to inflate the crowd sizes in images of her rallies.

AI might not have blown up politics the way some people worried it would. But Trump has seized on the new technology in a characteristically unorthodox fashion: as an easy means of sowing doubt about basic facts.

“When he's denying these images of Harris' crowds, you could think, ‘Wow, it's just pettiness,’” said Hany Farid, a professor at the University of California, Berkeley, who focuses on media forensics. “But I think it's worse than that, because I think he's setting the stage for denying the election.” (Farid is not the only one who thinks this.)

The crowd-size AI accusation had a typically inventive, Trumpian, off-the-cuff quality. More often, however, his campaign has used AI to bend facts more directly. In just the past few weeks, Trump posted an AI-generated image of Harris commanding a massive crowd under a communist banner on his X account and, most famously, reposted a collage of partly AI-generated images depicting support from Taylor Swift fans.

It’s been argued that all this is more like meme generation than a genuine attempt to distort the record. Even so, it carries real power in the hands of a figure like Trump, who has long blurred truth and fiction in ways that can be dizzying to untangle. “Fundamentally, is he doing something different? No, he's lying, but now it's supercharged with visual evidence,” said Farid.

The new AI-driven dimension of Trump’s tactics has pushed the national conversation about the ethics of AI-generated political content to the fore. Things are already getting weird online. On X, dozens of people responded to Trump’s post of an AI-generated Harris under a communist banner with their own AI-generated images, both supporting and disparaging the former president. In the comment section of a recent LinkedIn post in which Farid said images in the “Swifties for Trump” collage were AI-generated, one person wrote: “Are you saying that an AI-generated image cannot depict truth? What if the words used to generate it are truthful?”

In that kind of void, some ethics standards might help. On that front, however, neither the Democratic National Committee nor the Republican National Committee has issued guidelines on how political candidates should or shouldn’t use AI (though Democrats did hope to craft an agreement at one point).

Trump’s use of AI as a campaign tool also puts an unflattering spotlight on the complete lack of federal laws or regulations governing AI-generated political content. Proposals to put guardrails on such content have gathered steam in Congress, at the Federal Communications Commission and at the Federal Election Commission, but they all face stiff Republican pushback.

In Congress, Sen. Amy Klobuchar (D-Minn.) introduced two bills to address voter-facing AI-generated election content: one to ban deepfakes of candidates and the other to require disclosures on AI-manipulated political ads. Republicans on the Senate Rules Committee voted against both, but a Democratic majority advanced the bills out of committee in May anyway.
The bills then failed to pass by unanimous consent on the Senate floor in July and are still awaiting another shot at a full Senate vote.

Meanwhile, the FCC is proposing disclosure rules for AI use in political ads, while the FEC is seeking comments on a petition to amend its rules to ban deliberately deceptive uses of AI in campaign ads. Behind the scenes, FEC and FCC commissioners are arguing over whose job it is to regulate this space. Today, the FCC pushed back the comment deadline on its proposed rules, making it even less likely that any will be issued before the election.

It’s true that enforcing bans on certain uses of AI in political speech could get complicated quickly, both practically and constitutionally. But Farid says enforcing disclosure should be relatively easy if regulators target the right chokepoint.

Asking AI companies to build guardrails into their models isn’t the way, Farid said. “I can’t stop people from making fake shit. OpenAI, Midjourney, Firefly, they did a pretty good job of putting guardrails, but then along comes Flux and Grok, and you’re like, ‘Alright, somebody's gonna create an unhinged generative AI. There's nothing I can do about that.’” And the wide availability of open-source foundation models makes it even harder to regulate who has access to which AI tools, he added.

Asking social media platforms to enforce disclosure rules, on the other hand, might make a difference: “The problem is not the AI, it's social media,” he said. “It's the democratization of the distribution channels that makes this so dangerous.”

“The choke point is the distribution channels, because we have monopolies. X, Facebook, Instagram, TikTok, YouTube — that's 90% of the ball game,” Farid said.

But social media platform leaders have proven largely untouchable in Washington, even on flashpoint issues like kids’ online safety, despite overt frustration from lawmakers. Given the success of the tech industry’s massive lobbying apparatus so far, mandating that social media companies do anything, including enforcing disclosure rules on AI-generated content, is likely to be an uphill battle.