The AI elections fracas is well underway

How the next wave of technology is upending the global economy and its power structures
May 21, 2024

POLITICO's Digital Future Daily

By Derek Robertson

Illustration by Rob Dobi for POLITICO


As a massive election year — and the first to feature powerful AI tools — comes into focus, so do the numerous competing interests struggling to gain an advantage.

In the last installment of the “Bots and Ballots” series from POLITICO’s Mark Scott, he examines the countries where AI is already wreaking havoc on elections, the electioneers who want to use it more responsibly and how AI developers themselves are pitching their safety programs to governments and the public.

Each party, as Mark writes, has its work cut out for it.

“Where I believe we’re heading… is a 'post-post-truth' era, where people will think everything, and I mean everything, is made up, especially online,” he writes. “Think 'fake news,' but turned up to 11, where not even the most seemingly authentic content can be presumed to be 100 percent true.”

That’s already the case in the developing and middle-income nations of the world’s so-called “Global Majority,” which Mark writes have seen the technology spread like wildfire through election infrastructure with very little oversight. He spoke with Janet Love, vice chairperson of South Africa’s Independent Electoral Commission, who described how her agency feels unprepared for an onslaught of AI-powered deepfakes.

"All of us are feeling a huge need for greater capacity and expertise," she told Mark. "The difference between having appropriate measures in place, and capacities to implement those measures — you feel it all the time.”

Pakistan’s former prime minister Imran Khan caused a veritable political earthquake in that country’s recent elections, largely through his use of generative AI to “give speeches”… from prison. An Argentinean analyst told Mark that her team has seen a cascade of AI-assisted false information about Latin American elections this year, especially audio. The cumulative effect, he writes, is a breakdown of civic trust.

Then there’s the more upbeat version: some election professionals want to put a happier, more responsible face on AI’s use in their trade. Mark spoke with Hannah O’Rourke, the London-based cofounder of the nonprofit Campaign Lab, about her work to create what they call “the future of influence” via AI.

"There are some interesting, creative AI solutions that can help humans do better at things like campaigning,” she told Mark, “So the question is: How do we, as people who want to do good things, use this tool in a way that is in accordance with what we think is right?"

In practice, that’s not so clear yet — largely because the overlap between AI engineers and wannabe AI electioneers is usually quite small, making it hard for the latter to sift the good from the bad. It also simply might not be profitable, the cardinal sin for ambitious AI engineers: One Lithuanian AI marketing entrepreneur reported no bites for her company’s election offerings, and the entrepreneur Oren Etzioni said his deepfake-detecting tool is “by design — now and in the future — a money-losing proposition.”

So that means the key players are the tech giants and well-funded startups, already locked into their own tense, ongoing standoff with world governments about the safe implementation of their products. Seeking to better understand how companies like Amazon, Meta, and OpenAI are positioning themselves as partners in a healthy civic sphere, Mark and POLITICO’s Hanne Cokelaere teamed up to analyze the language in those companies’ public statements, tech specifications and terms and conditions, plucking out the core message each is trying to convey.

Alphabet, Google’s parent company, is trying to be everything to everyone, “promoting AI safety to policymakers, inclusivity to civil society groups, and innovation and research to potential customers.” Amazon is promoting its use of AI to power its enterprise products. Meta is touting its open-source philosophy. OpenAI is technically wonky and safety-minded. But strangely, as a huge slate of elections unfold alongside the disruption wrought by their products, Mark and Hanne report that “voting protection and integrity did not play a central role in companies’ public statements.”

What, then, is a defenseless voter to do? Mark concludes in a column wrapping up the series that while AI developers, governments, civil society groups and ne’er-do-wells across the globe continue their familiar cat-and-mouse-and-cat-and-mouse game, the onus will fall on voters to harness their inner skeptic and avoid simply sleepwalking into his imagined “post-post-truth” dystopia.

“In such a world, it’s only rational to not have faith in anything,” Mark writes, but “The positive is that we’re not there yet.

“Yes, AI-fueled disinformation is upon us. But no, it’s not an existential threat, and it must be viewed as part of a wider world of ‘old-school’ campaigning and, in some cases, foreign interference and cyberattacks… for this year’s election cycle, your best bet is to remain vigilant, without getting caught up in the hype-train that artificial intelligence has become.”

 


 
 
'her' story

A rough week of PR for OpenAI continued with one of the world’s most popular actresses accusing it of, quite literally, stealing her voice.

Scarlett Johansson, star of Marvel films, AI fantasia “Her,” and (most importantly) “Lost in Translation,” released a statement yesterday afternoon detailing negotiations in which OpenAI’s Sam Altman approached her to lend her voice to ChatGPT, an arrangement that, in her retelling, would bridge “the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and A.I.”

Long story short, she said no, more than once. So it might have come as a bit of a surprise when the company debuted its GPT-4o-powered voice assistant last week with a voice remarkably similar to hers — and when, with a California grapefruit-sized dollop of chutzpah, Altman accompanied its release by simply tweeting “her.”

“When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” Johansson said in a statement, adding that she had hired legal counsel and that OpenAI had already agreed to replace the voice.

OpenAI defended itself in its own statement, writing that the bot’s “voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice.” Still, the debate over AI companies’ right to take and repurpose the whole of human experience for use in their products without consent is not likely to abate: entrepreneur and AI researcher Gary Marcus wrote in his own newsletter this morning that “All of this is really about consent… Spin is a way of life at OpenAI; telling the truth is not.”

a different 'roadmap'

Civil society groups have teamed up to trash the Senate’s AI “roadmap” published last week — and also to propose some alternatives.

A group that includes the Electronic Frontier Foundation, Georgetown Center on Privacy and Technology, AI Now Institute chief advisor Meredith Whittaker, a researcher from Data & Society and many more wrote in a “Shadow Report to the US Senate AI Policy Roadmap” that a “bespoke, industry-driven process unsurprisingly led the Senate to an industry-friendly destination,” and that the roadmap is “a massive commitment of taxpayer money to ‘innovation’ without a vision for how this innovation will serve the public.”

The report then recommends a series of policy areas to which legislators should shift their focus, framed as a “floor,” not a “ceiling,” for ongoing AI policymaking — including racial justice and equity, immigration, accountability, labor, privacy, safeguarding democracy and several more.

“Lobbying dollars, campaign contributions and revolving door relationships have had a material impact on federal policy related to big tech and artificial intelligence,” the authors conclude. “We compile the evidence here as a glimpse of what could have been… Congress must do more than namecheck the issues and offer token participation to those affected by AI.”

Tweet of the Day

this PC Computing magazine “Roadmap to Top Online Services” poster adorned my office wall 25 years ago. I took a picture of it before I threw it in the trash after absolutely everything on it ceased to exist or became irrelevant. But for a time, these companies or services were thought to be untouchable tech giants.


Stay in touch with the whole team: Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com); Daniella Cheslow (dcheslow@politico.com); and Christine Mui (cmui@politico.com).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 


 
 
 

 
