California lawmakers sent a nationally consequential AI bill to Gov. Gavin Newsom’s desk last week, America’s most high-profile effort to date to put fresh legal guardrails around AI safety. It’s not clear that Newsom will sign it; influential California Democrats, including Nancy Pelosi, are digging in against it. But a look at one of its key provisions, and the debate around it, shows just how much the argument over AI has shifted since lawmakers started to worry about it last year.

Senate Bill 1047 would hold companies liable for harms caused by their software, establish protections for AI whistleblowers, and put safety restrictions and requirements on AI models that reach a certain level of computational power. It is the closest any new American law has come to the European Union’s sweeping AI Act, and it would be the most stringent AI law in the country: Congress hasn’t passed any significant legislation on AI despite extensive promises, and the Biden White House’s executive order is a mix of voluntary commitments and enforcement of existing law.

The bill has renewed a debate that started in the earliest days of this AI hype cycle: what, exactly, are regulators trying to keep Americans safe from in the first place, and are the most serious risks posed by AI lurking in the future, or already here?

SB 1047 contemplates the possibility of AI causing “critical harms” to humanity, and singles out the largest and most powerful AI models for specific safety requirements. (Models that cost at least $100 million to train and use 10^26 floating-point operations, a measure of a system’s brute computational force and the same computing benchmark Biden used in his 2023 executive order, would be required to develop model-specific safety plans; a back-of-envelope sketch of what that threshold means in practice appears at the end of this piece.)

This focus on scale suggests that regulators fear what AI leaders like OpenAI’s Sam Altman and Meta’s Mark Zuckerberg have called a digital “superintelligence”: a sort of computer brain surpassing human capabilities, with all the instability that a hyper-powerful, autonomous, and to some extent unknowable system might bring.

For years, the primary debate in AI circles has pitted those (like Elon Musk, or former United Kingdom Prime Minister Rishi Sunak) who think this risk is real, if not imminent, and requires immediate and decisive action against those who decry it as a self-serving fairy tale spun by Silicon Valley bigwigs who want regulators to put the kibosh on their competitors in the name of safety. If Newsom signs SB 1047, it would be a massive win for the former group (see Musk’s surprise endorsement last week), moving the conversation around AI safety and “superintelligence” from the realm of the theoretical into the reality of law.

But is this really the most important concern about AI? Some critics argue that the “superintelligence” fears are massively overblown, and that even a relatively “dumb” AI system can discriminate or otherwise sow civic havoc, as in the notorious Dutch welfare fraud scandal, in which an algorithm wrongly flagged tens of thousands of benefit recipients. Even many of those critics, though, view SB 1047 as a step in the right direction, given the onus it puts on AI giants to make sure their systems don’t cause harm. Entrepreneur and author Gary Marcus has been an outspoken critic of claims about the supposedly world-changing power of AI, but he endorsed the bill on X and sees efforts to rein in AI companies as important even if the technology doesn’t live up to the hype.
“Current AI, as mediocre as it is, does pose serious threats,” Marcus told DFD. “Because it is blind to truth but great at human mimicry, it has become an extremely powerful tool for generating misinformation at low cost and unprecedented scale,” he said, “which could undermine democracy, and lead to needless wars and the loss of life… There is already plenty of reason for concern, and we should be asking ourselves whether the benefits outweigh the risks.”

Helen Toner, a former OpenAI board member now at Georgetown University’s Center for Security and Emerging Technology, has offered qualified praise for the bill. She told DFD that whether or not AI achieves the godlike powers feared by Musk and the bill’s most outspoken supporters, it’s worth taking AI companies’ outsize ambitions at face value. A superintelligent AI might not be an immediate threat, she cautioned, but the risk still bears watching.

“It’s hard to look seriously at the last 10 years of AI progress and argue that we shouldn’t treat superintelligence in 10 or 20 years as a serious possibility,” Toner told DFD, “though of course ‘serious possibility’ is a far cry from ‘certainty,’ so it would also be silly to make policy as if it were a done deal.”

It’s still unclear whether Newsom will sign SB 1047; he has until Sept. 30 to decide, and he may not want to buck the roster of high-profile California Democrats who have come out against it. Either way, the debate has forced lawmakers, policy wonks, and even skeptics like Marcus to reckon with the AI giants’ sweeping ambitions.

“The fundamental point is that billions of dollars and thousands of the world’s best minds are currently aimed at creating machines that can do increasingly sophisticated intellectual tasks, and so far they seem to be succeeding,” Toner said. “We don’t know where those efforts will lead; if nothing else I’d say that should inspire some humility.”
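For readers curious what 10^26 floating-point operations actually buys, here is a minimal back-of-envelope sketch in Python. It assumes the common rule of thumb from the machine-learning scaling literature that training a dense model costs roughly 6 × parameters × training tokens in floating-point operations; real compute accounting varies by architecture, the model sizes below are hypothetical illustrations rather than figures from the bill, and the bill’s separate $100 million cost test is not modeled.

    # Back-of-envelope: would a hypothetical training run cross SB 1047's
    # 10^26-FLOP threshold? Assumes the rough approximation that training
    # a dense model costs ~6 * parameters * tokens; real accounting
    # differs by architecture, and the bill's $100M cost test is omitted.

    SB1047_FLOP_THRESHOLD = 1e26  # total floating-point operations

    def training_flops(n_params: float, n_tokens: float) -> float:
        """Rough total training compute for a dense model."""
        return 6.0 * n_params * n_tokens

    def crosses_threshold(n_params: float, n_tokens: float) -> bool:
        """Would this (hypothetical) run fall under the bill's compute bar?"""
        return training_flops(n_params, n_tokens) >= SB1047_FLOP_THRESHOLD

    # Hypothetical example: 1 trillion parameters, 20 trillion tokens.
    flops = training_flops(1e12, 2e13)  # ~1.2e26 FLOPs
    print(f"{flops:.1e} FLOPs; covered: {crosses_threshold(1e12, 2e13)}")

By that crude math, the threshold sits just beyond today’s largest publicly known training runs, which is why the bill’s supporters describe it as targeting only future frontier systems.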