THE BUZZ: WHAT’S HE THINKING? — The California Legislature has demonstrated a clear desire to put guardrails on artificial intelligence. How does Gov. Gavin Newsom feel about it? ¯\_(ツ)_/¯

Some of the nation’s most ambitious AI policies — including pre-deployment safety testing for the biggest models, preventing the technology from replacing certain jobs and stopping it from mimicking actors without their consent — could land on the governor’s desk in a matter of weeks.

Colorado has already taken the lead on this front, with Democratic Gov. Jared Polis signing two major AI proposals into law last week, despite heavy opposition. Connecticut Gov. Ned Lamont took a different tack, quickly shutting down efforts from his own legislature with the threat of a veto.

Newsom, meanwhile, has stayed mum on the dozens of bills that have appeared in print this session. The governor, famously, doesn’t comment on pending legislation (except when he does), and has kept quiet on some of this year’s hottest bills. He’d have good reason for not getting ahead of himself: The fights over AI involve the state’s biggest political and economic power players, including Silicon Valley, Hollywood and labor.

That could change today, when Newsom takes the stage in San Francisco during a joint summit hosted by his office, UC Berkeley and Stanford. We’re so eager to hear his take on such a politically fraught issue that we came up with some questions of our own. Here’s what we’d ask if we had the mic:

What kind of AI regulation is most important for California to tackle first, even with the state’s budget constraints? This year’s multibillion-dollar budget deficit means even bills the governor might agree with could be put on hold until the coffers are a bit more full. Many proposals — notably, state Sen. Scott Wiener’s sweeping AI framework — would require money to enforce, or would create new state agencies that need staffing and resources.
But the concerns posed by AI are very real, and we’re already seeing the impact of deepfakes and unauthorized digital replications. We’d like to know what the governor sees as the most urgent need for the state to tackle this year.

How does California find the right balance between encouraging innovation and protecting the public — and jobs — from negative impacts? The constant refrain we’re hearing from Silicon Valley is that overregulation will squash the potential benefits of artificial intelligence. But many opponents, from equity groups to labor unions, warn that an unchecked industry could cause irreparable damage similar to what happened with social media.

There’s also the worry about robots taking our jobs. Several lawmakers have introduced proposals to insulate certain employees from AI, including teachers, call center workers, truck drivers and grocery store workers. But Newsom said at his May budget presentation that AI was “not a job killer.” He also said it could help the state become more efficient and cut costs.

Newsom, a son of San Francisco, boasts relationships with some of Silicon Valley’s most powerful players, and often praises the industry as a powerhouse for California’s economy. But the governor does not always side with tech, and has occasionally shown a willingness to push back — like when he signed the landmark Age Appropriate Design Code and subsequently chastised the CEO of industry group NetChoice for suing to block it. Where does he draw the line with artificial intelligence?

What do you think is the biggest risk with artificial intelligence? For some, the threats posed by AI go a lot further than copying Scarlett Johansson’s voice without her consent. There is growing concern among some tech experts that powerful artificial intelligence models could pose huge risks in the hands of bad actors — leading to global catastrophe, or even human extinction.
It’s part of the reason Wiener wants to require large-scale models to undergo safety testing before they deploy. On the other end of the spectrum, those like venture capitalist Marc Andreessen believe AI could bring life-altering benefits to humanity. Does Newsom give any credence to either side? Is he in the middle?

What is the state doing to keep pace with the private sector? We can’t stop hearing about how fast AI is developing, and we want to know if California can keep up. Canada, for example, just spent $2.4 billion on its own AI development and research efforts. The United Kingdom is pursuing a similar endeavor.

The governor last year released guidance directing state agencies to find the best uses for AI, and this month announced efforts to study how the technology can manage traffic, help business owners do their taxes, and connect non-English speakers with health benefits. But tech equity advocates and some lawmakers don’t want the private sector to have complete control over the technology. They think the state should invest in publicly funded research hubs and university programs to help shape AI development in a responsible way. Does the governor agree?

Are you a robot? It’s so hard to tell the difference these days.

GOOD MORNING. Happy Wednesday. Thanks for waking up with Playbook. You can text us at 916-562-0685 — save it as “CA Playbook” in your contacts. Or drop us a line at lkorte@politico.com and dgardiner@politico.com, or on X — @DustinGardiner and @Lara_Korte.

WHERE’S GAVIN? In the Bay Area for the aforementioned AI summit and a U.S.-China event on climate action.