California tackles digital superintelligence — maybe

How the next wave of technology is upending the global economy and its power structures
Sep 03, 2024

POLITICO’s Digital Future Daily

By Derek Robertson


Democratic California state Sen. Scott Wiener, the architect of California’s Senate Bill 1047. | Rich Pedroncelli/AP

California lawmakers sent a nationally consequential AI bill to Gov. Gavin Newsom’s desk last week — America’s most high-profile effort to date to put fresh legal guardrails on AI.

It’s not clear that Newsom will sign it; influential California Democrats, including Nancy Pelosi, are digging in against it. But a look at one of its key provisions — and the debate around it — shows just how much the argument over AI has been shifting since lawmakers started worrying about it last year.

Senate Bill 1047 would hold companies liable for harms caused by their software, establish protections for AI whistleblowers, and put safety restrictions and requirements on AI models that reach a certain level of computational power, making it the closest any new American law has come to the European Union’s sweeping AI Act.

This would be the most stringent AI law nationwide. Congress hasn’t passed any significant legislation on AI, despite extensive promises, and the Biden White House’s executive order is a mix of voluntary commitments and enforcement of existing law.

The bill has renewed a debate that started in the earliest days of this AI hype cycle — about what exactly regulators are trying to keep Americans safe from in the first place, and whether the most serious risks posed by AI are lurking in the future or already here.

The California bill considers the possibility of AI causing “critical harms” to humanity, and singles out the largest and most powerful AI models for specific safety requirements. (Developers of models that cost at least $100 million to train and use more than 10^26 flops — a measure of a system’s brute computational force, and the same computing benchmark Biden used in his 2023 executive order — will be required to develop model-specific safety plans.)

This focus on scale suggests that regulators fear what AI leaders like OpenAI’s Sam Altman and Meta’s Mark Zuckerberg have called a digital “superintelligence,” a sort of computer brain surpassing human capabilities, with all the instability that a hyper-powerful, autonomous, and to some extent unknowable system might bring.

For years, the primary debate in AI circles has been between those (like Elon Musk, or former United Kingdom Prime Minister Rishi Sunak) who think this risk is real, if not imminent, and requires immediate and decisive action — and those who decry it as a self-serving fairy tale spun by Silicon Valley bigwigs who want regulators to put the kibosh on their competitors in the name of safety.

If Newsom signs SB 1047, it would be a massive win for the former group (see Musk’s surprise endorsement last week), moving the conversation around AI safety and “superintelligence” from the realm of the theoretical into the reality of law.

But is this really the most important concern about AI?

Some critics argue that the “superintelligence” concerns are massively overblown, and that even a relatively “dumb” AI system has the potential to discriminate or otherwise sow civic havoc, as in the notorious Dutch welfare fraud scandal, in which an algorithm incorrectly penalized tens of thousands of benefits recipients.

But even many of those critics view SB 1047 as a step in the right direction, considering the onus it puts on AI giants to make sure their systems don’t cause harm. Entrepreneur and author Gary Marcus has been an outspoken critic of claims about the supposedly world-changing power of AI, but endorsed the bill on X and sees efforts to rein in AI companies as important even if the technology doesn’t live up to the hype.

“Current AI, as mediocre as it is, does pose serious threats,” Marcus told DFD. “Because it is blind to truth but great at human mimicry, it has become an extremely powerful tool for generating misinformation at low cost and unprecedented scale,” he said, “which could undermine democracy, and lead to needless wars and the loss of life… There is already plenty of reason for concern, and we should be asking ourselves whether the benefits outweigh the risks.”

Helen Toner, a former OpenAI board member now at Georgetown University’s Center for Security and Emerging Technology, has offered qualified praise for the bill. She told DFD that whether or not AI achieves the godlike powers feared by those like Musk and the bill’s most outspoken supporters, it’s worth taking AI companies’ outsize ambitions at face value.

She cautioned that while a superintelligent AI might not be an immediate threat, the risk still bears watching.

“It's hard to look seriously at the last 10 years of AI progress and argue that we shouldn't treat superintelligence in 10 or 20 years as a serious possibility,” Toner told DFD, “though of course ‘serious possibility’ is a far cry from ‘certainty,’ so it would also be silly to make policy as if it were a done deal.”

It’s still unclear whether Newsom will sign SB 1047; he has until Sept. 30, and may not want to buck the roster of high-profile California Democrats who have come out against it. Either way, the debate has forced lawmakers, policy wonks, and even skeptics like Marcus to reckon with the AI giants’ sweeping ambition.

“The fundamental point is that billions of dollars and thousands of the world's best minds are currently aimed at creating machines that can do increasingly sophisticated intellectual tasks, and so far they seem to be succeeding,” Toner said. “We don't know where those efforts will lead; if nothing else I'd say that should inspire some humility.”

amateurs of disguise


Jacob Wohl and Jack Burkman speak at an Aug. 6, 2020, press conference, denying allegations of kidnapping against Wohl by Merritt Corrigan. | wohl-1/Flickr via Creative Commons

Some familiar political ne’er-do-wells are dressing up their old tricks in new, AI-themed garb.

POLITICO’s Daniel Lippman reported yesterday on the return of Jack Burkman and Jacob Wohl — last seen getting convicted for trying to suppress Black voters via robocall — as entrepreneurs promising to integrate AI into clients’ lobbying efforts. Those clients have at times included Toyota, the consulting firm Boundary Stone Partners and the drug company Lantheus, all, apparently, while Burkman and Wohl disguised their identities as “Bill Sanders” (Burkman) and “Jay Klein” (Wohl).

After viewing images of Burkman and Wohl, anonymous employees confirmed to Daniel that “Bill” and “Jay” were in fact the same men. One called the pair “out of touch with reality,” saying that “Working for them you knew you were never getting the full story and were often left trying to find the truth.” Wohl did not respond to a request for comment, while Burkman hung up on a call from POLITICO.

“If I knew who they were and the shit that they had done related to their FCC case, I wouldn’t have touched it with a 10-foot pole,” one former employee said. “It just feels really dirty and I’m also happy to be away from it. I’m done, I’m gone.”

starliner comes home (soon)

The saga of Boeing’s troubled Starliner capsule is nearing its conclusion, as the company and NASA plan to bring the vessel back to Earth unmanned.

The capsule is currently docked at the International Space Station, where a return initially scheduled for months ago has been repeatedly delayed due to technical problems. Meanwhile, its crew has also stayed at the ISS much longer than expected, leading to a defensive news cycle where NASA and Boeing have insisted that the astronauts are not, technically, “stranded.”

Then on Monday, NASA said in a statement that despite a new set of strange noises coming from the capsule, first noticed by a Michigan-based space enthusiast, Starliner will detach from the ISS and fly on autopilot back to Earth on Friday. It’s a crushing blow for Boeing’s space program, as the two astronauts who rode Starliner to the International Space Station will now return on a capsule manufactured by rival SpaceX.

NASA announced that the “pulsing sound” has stopped, and that “feedback from the speaker was the result of an audio configuration between the space station and Starliner… The speaker feedback Wilmore reported has no technical impact to the crew, Starliner, or station operations, including Starliner’s uncrewed undocking from the station no earlier than Friday, Sept. 6.”

On Friday NASA announced the crew for the SpaceX Crew Dragon mission that will launch later this month and retrieve Starliner astronauts Butch Wilmore and Suni Williams — carrying a complement of two astronauts instead of the usual four, to accommodate Wilmore and Williams’ hitching a ride back to Earth.

TWEET OF THE DAY

I wish there were a way to covertly release a handful of Chase ATM-like glitches a few times a year to reveal who can’t be trusted to live in a society

The Future in 5 links

Stay in touch with the whole team: Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com); Daniella Cheslow (dcheslow@politico.com); and Christine Mui (cmui@politico.com).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

Follow us on Twitter

Daniella Cheslow @DaniellaCheslow

Steve Heuser @sfheuser

Christine Mui @MuiChristine

Derek Robertson @afternoondelete

 
