As quickly as big tech companies are developing the tools they say will change the world, their own world is shifting under their feet.

A series of minor earthquakes in the artificial intelligence landscape in recent weeks has shown that when it comes to defining AI “safety” — that is, exactly what risks AI poses in the first place, and how they should be prioritized — hardly anything is settled, and industry now appears to be winning an influence race in which well-funded intellectuals had jumped out to an early lead.

On Sunday, California Gov. Gavin Newsom vetoed his state’s SB 1047, which would have put restrictions on the most powerful AI models and made companies liable for “catastrophic” harms they might unleash — striking a blow against worried AI-watchers (like Elon Musk) who believe the technology’s world-wrecking potential means its development should be paused until worst-case scenarios can be ruled out.

That was just the latest in a series of public setbacks for those who believe “artificial general intelligence,” an AI system as intelligent as or more intelligent than a human, is right around the corner and needs to be guarded against. Last week a wave of departures from OpenAI shifted the company further away from its initial safety-centric mission — and suggested to some observers that renewed skepticism might be warranted about the company’s claims about developing AGI. And Meta’s Nick Clegg recently tore into former U.K. Prime Minister Rishi Sunak for his perceived excessive worry about the existential risk posed by powerful AI.

“Vetoing SB 1047 is a major political setback for the AI safety community, not because it would have done a lot for safety in itself, but because it did so relatively little and yet still failed to get over the line,” said Samuel Hammond, an economist at the right-leaning tech think tank Foundation for American Innovation.

Hammond, a big believer in AI’s revolutionary potential, lamented the veto as potentially opening the door to another, harsher round of regulation more akin to the European Union’s far-reaching AI Act.

As Newsom suggested in his veto message for SB 1047, the next AI bill that comes out of California is likely to address a wider set of specialized uses of the technology. That favors a vision of AI safety more concerned with what the technology can do here and now with regard to privacy, copyright and labor issues — rather than the theoretical catastrophic risks feared by figures like Musk, or by thinkers at the Future of Life Institute, whose March 2023 letter calling for an AI “pause” sent shockwaves through the AI community.

That would be another unwelcome blow to the effective altruism community, a group of largely Silicon Valley-associated thinkers who advocate a utilitarian, data-driven approach to humanitarian and social policy, and who have made steady inroads in Washington. While the original version of effective altruism amounted to doing more rigorous research on charitable giving, or purchasing mosquito nets en masse, perhaps the movement’s most high-profile branch has been the one concerned with “longtermism,” or taking actions that will maximally ensure humanity’s welfare in the future. In recent years, those altruists have become extremely focused on the long-term risks of AI.
As global regulators’ eyes turn back toward the more here-and-now potential consequences of AI, and AI-watchers get impatient for the arrival of GPT-5, never mind a “machine god,” the movement risks losing ground to a more earthbound view of how humans should regulate AI.

EA is “certainly on a back foot, but nothing it can’t recover from,” said FAI’s Hammond. (He personally thinks that AGI skeptics are being misled by the limitations of current computer hardware — and that in fact we’re well on track to develop an exciting, but potentially dangerous, superintelligence.)

Jason Green-Lowe, executive director of the EA-associated nonprofit Center for AI Policy, told POLITICO’s Jeremy B. White and Lara Korte Monday that SB 1047 would have “bought us some time and showed the way forward,” and lamented that “there’s even more need for Congress and the next president to take a stand on AI safety.”

But now that the biggest bill addressing AI from America’s de facto tech regulator (the state of California) is dead, the hard, messy process of figuring out how, where, and why to tackle AI could start all over again, with no clear advantage for any party in the debate — except, of course, the AI companies themselves.