As a massive election year — and the first to feature powerful AI tools — comes into focus, so do the numerous competing interests scrambling to gain an advantage. In the final installment of POLITICO’s “Bots and Ballots” series, Mark Scott examines the countries where AI is already wreaking havoc on elections, the electioneers who want to use it more responsibly and how AI developers themselves are pitching their safety programs to governments and the public. Each party, as Mark writes, has its work cut out for it. “Where I believe we’re heading… is a ‘post-post-truth’ era, where people will think everything, and I mean everything, is made up, especially online,” he writes. “Think ‘fake news,’ but turned up to 11, where not even the most seemingly authentic content can be presumed to be 100 percent true.”

That’s already the case in the developing and middle-income nations of the world’s so-called “Global Majority,” where, Mark writes, the technology has spread like wildfire through election infrastructure with very little oversight. He spoke with Janet Love, vice chairperson of South Africa’s Independent Electoral Commission, who described how her agency feels unprepared for an onslaught of AI-powered deepfakes. “All of us are feeling a huge need for greater capacity and expertise,” she told Mark. “The difference between having appropriate measures in place, and capacities to implement those measures — you feel it all the time.”

Pakistan’s former prime minister Imran Khan set off a veritable political earthquake in that country’s recent elections, driven largely by his use of generative AI to “give speeches”… from prison. An Argentine analyst told Mark that her team has seen a cascade of AI-assisted false information about Latin American elections this year, especially audio. The cumulative effect, he writes, is a breakdown of civic trust.

Then there’s the more upbeat version.
Some election professionals want to put a happier, more responsible face on the use of AI in their trade. Mark spoke with Hannah O’Rourke, the London-based cofounder of the nonprofit Campaign Lab, about her work to shape what they call “the future of influence” via AI. “There are some interesting, creative AI solutions that can help humans do better at things like campaigning,” she told Mark. “So the question is: How do we, as people who want to do good things, use this tool in a way that is in accordance with what we think is right?”

In practice, that’s not so clear yet — largely because the overlap between AI engineers and would-be AI electioneers is usually quite small, making it hard for the latter to sift the good from the bad. It also simply might not be profitable, the cardinal sin for ambitious AI engineers: One Lithuanian AI marketing entrepreneur reported no bites for her company’s election offerings, and the entrepreneur Oren Etzioni said his deepfake-detecting tool is “by design — now and in the future — a money-losing proposition.”

That leaves the key players: the tech giants and well-funded startups, already locked in a tense, ongoing standoff with world governments over the safe deployment of their products. Seeking to better understand how companies like Amazon, Meta and OpenAI are positioning themselves as partners in a healthy civic sphere, Mark and POLITICO’s Hanne Cokelaere teamed up to analyze the language in those companies’ public statements, tech specifications and terms and conditions, plucking out the core message each is trying to convey. Alphabet, Google’s parent company, is trying to be everything to everyone, “promoting AI safety to policymakers, inclusivity to civil society groups, and innovation and research to potential customers.” Amazon is promoting its use of AI to power its enterprise products. Meta is touting its open-source philosophy. OpenAI is technically wonky and safety-minded.
But strangely, as a huge slate of elections unfolds alongside the disruption wrought by their products, Mark and Hanne report that “voting protection and integrity did not play a central role in companies’ public statements.”

What, then, is a defenseless voter to do? In a column wrapping up the series, Mark concludes that while AI developers, governments, civil society groups and ne’er-do-wells across the globe continue their familiar cat-and-mouse-and-cat-and-mouse game, the onus will fall on voters to harness their inner skeptic and avoid simply sleepwalking into his imagined “post-post-truth” dystopia. “In such a world, it’s only rational to not have faith in anything,” Mark writes, but adds: “The positive is that we’re not there yet.

“Yes, AI-fueled disinformation is upon us. But no, it’s not an existential threat, and it must be viewed as part of a wider world of ‘old-school’ campaigning and, in some cases, foreign interference and cyberattacks… for this year’s election cycle, your best bet is to remain vigilant, without getting caught up in the hype-train that artificial intelligence has become.”