What Gavin Newsom just did to the global AI debate

Presented by Instagram: How the next wave of technology is upending the global economy and its power structures
Oct 02, 2024

By Derek Robertson


California's Gov. Gavin Newsom. | Damian Dovarganes/AP

As quickly as big tech companies are developing the tools they say will change the world, their own world is shifting under their feet.

A series of minor earthquakes in the artificial intelligence landscape in recent weeks has shown that when it comes to defining AI “safety” — that is, exactly what risks AI poses in the first place, and how they should be prioritized — hardly anything is settled, and industry now appears to be winning an influence race in which well-funded intellectuals had jumped out to an early lead.

On Sunday, California Gov. Gavin Newsom vetoed his state’s SB 1047, which would have put restrictions on the most powerful AI models and made companies liable for “catastrophic” harms they might unleash — striking a blow against worried AI-watchers (like Elon Musk) who believe the technology’s world-wrecking potential means its development should be paused until worst-case scenarios can be ruled out.

That was just the latest in a series of public setbacks for those who believe “artificial general intelligence,” an AI system as intelligent as or more intelligent than a human, is right around the corner and needs to be guarded against.

Last week a wave of departures from OpenAI shifted the company further away from its initial safety-centric mission — and suggested to some observers that renewed skepticism might be warranted toward OpenAI’s claims about developing AGI. And Meta’s Nick Clegg recently tore into former U.K. Prime Minister Rishi Sunak for what he considers excessive worry about the existential risk posed by powerful AI.

“Vetoing SB 1047 is a major political setback for the AI safety community, not because it would have done a lot for safety in itself, but because it did so relatively little and yet still failed to get over the line,” said Samuel Hammond, an economist at the right-leaning tech think tank Foundation for American Innovation.

Hammond, a big believer in AI’s revolutionary potential, lamented the veto as potentially opening the door to another, harsher round of regulation more akin to the European Union’s far-reaching AI Act. As Newsom suggested in his veto message for SB 1047, the next AI bill to come out of California is likely to address a wider group of specialized uses for the technology. That favors a vision of AI safety more concerned with what the technology could do here and now with regard to privacy, copyright and labor issues — rather than the theoretical catastrophic risks that worry figures like Musk, or thinkers at the Future of Life Institute, whose March 2023 letter calling for an AI “pause” sent shockwaves through the AI community.

That would be another unwelcome blow to the effective altruism community, a group of largely Silicon Valley-associated thinkers who advocate for a utilitarian, data-driven approach to humanitarian and social policy, and who have made steady inroads in Washington. While the original version of effective altruism amounted to doing more rigorous research on charitable giving, or purchasing mosquito nets en masse, perhaps the movement’s highest-profile branch has been the one concerned with “longtermism,” or taking actions that will maximally ensure humanity’s welfare in the future. In recent years, effective altruists have become intensely focused on the long-term risks of AI.

As global regulators’ eyes turn back toward the more here-and-now potential consequences of AI, and AI-watchers get impatient for the arrival of GPT-5, never mind a “machine god,” the movement risks losing ground to a more earthbound view of how humans should regulate AI.

EA is “certainly on a back foot, but nothing it can’t recover from,” said FAI’s Hammond. (He personally thinks that AGI skeptics are being misled by the limitations of current computer hardware — and that in fact we’re well on track for the development of an exciting but potentially dangerous superintelligence.)

Jason Green-Lowe, executive director of the Center for AI Policy, an EA-associated nonprofit, told POLITICO’s Jeremy B. White and Lara Korte on Monday that SB 1047 would have “bought us some time and showed the way forward,” and that now “there’s even more need for Congress and the next president to take a stand on AI safety.”

But now that the biggest bill addressing AI from America’s de facto tech regulator (the state of California) is dead, the hard, messy process of figuring out how, where and why to tackle AI could start all over again, with no clear advantage for any party in the debate — except, of course, the AI companies themselves.

 

A message from Instagram:

Introducing Instagram Teen Accounts: a new experience for teens, guided by parents.

Instagram is launching Teen Accounts, with built-in protections limiting who can contact teens and the content they can see. Plus, only parents can approve safety setting changes for teens under 16.

So parents can have more peace of mind when it comes to protecting their teens.

Learn more.

 
eu vs. the apps

The European Commission is demanding social media companies prove that their algorithms aren’t harming their users.

POLITICO’s Clothilde Goujard reported for Pro subscribers on the request for information from the Commission, which asked TikTok, YouTube and Snapchat to explain how their content-recommendation algorithms comply with the parts of the Digital Services Act that aim to limit the spread of false information and harms to users’ mental health.

The request is the first step toward an official investigation, which could lead to fines of up to 6 percent of a company’s annual global revenue. Meta and TikTok are already the subjects of separate formal EU investigations over their algorithms.

place your bets

Coming soon to elections near you (and starting with this one): Gambling!

POLITICO’s Declan Harty reported for Pro subscribers that a federal appeals court lifted a freeze on political betting today, allowing financial-exchange startup Kalshi to reopen its shuttered election-betting markets, the first fully regulated ones in the United States. For now, Kalshi allows betting only on which party will control Congress, but it says it plans to expand to the presidential election and more.

Kalshi’s operations are regulated by the Commodity Futures Trading Commission, which slowed the company’s progress toward legal betting for years. As Declan writes, the CFTC remains critical of election-betting markets, with Chair Rostin Behnam warning that they will force his agency to play “election cop.”

Still, Judge Patricia Millett of the U.S. Court of Appeals for the District of Columbia Circuit said despite the perverse incentives such a market might create, it’s not against the law: “Ensuring the integrity of elections and avoiding improper interference and misinformation are undoubtedly paramount public interests, and a substantiated risk of distorting the electoral process would amount to irreparable harm… the problem is that the Commission has given this court no concrete basis to conclude that event contracts would likely be a vehicle for such harms.”

 


 
TWEET OF THE DAY

everyone agrees: war is bad.  that is why mankind invented the Special Military Operation

The Future in 5 links

Stay in touch with the whole team: Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com); Daniella Cheslow (dcheslow@politico.com); and Christine Mui (cmui@politico.com).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 


 
 

Follow us on Twitter

Daniella Cheslow @DaniellaCheslow

Steve Heuser @sfheuser

Christine Mui @MuiChristine

Derek Robertson @afternoondelete

 
