The debate over what, exactly, constitutes “safe” artificial intelligence continues to heat up.

If you’re not part of the hyper-online community of AI wonks, engineers and enthusiasts, you might have missed a spat this week on X between Democratic California state Sen. Scott Wiener and AI nonprofit director Brian Chau, who accused the lawmaker in a lengthy thread of attempting to “crush OpenAI’s competitors” and “hurt startups and open source” with his proposed AI legislation. Chau argued that by requiring licensing and regulation for AI models that meet benchmarks similar to OpenAI’s industry-leading ones, the state would effectively make it impossible for smaller AI companies to compete. Wiener shot back, calling the critique “outright fear-mongering” and saying lawmakers have worked with the startup community to make sure the bill is workable for all participants in the market.

It’s not just Silicon Valley lawmakers who are trying to put the regulatory clamps on the most powerful AI models: A bipartisan group of senators introduced legislation in April that would establish federal oversight for them, and the European Union’s AI Act establishes transparency requirements for so-called “foundation models.” President Joe Biden’s executive order on AI set specific computing-power thresholds, not yet reached by any current model, above which AI systems will merit extra regulatory attention.

A major point of contention in regulators’ ongoing efforts to tackle AI is what, exactly, constitutes “safety.” For some researchers, the issues posed by AI aren’t all that different from those posed by tech platforms more generally: they see the technology carrying largely the same risks of privacy invasion and media manipulation that have plagued the app-dominated social media era. Others, including Elon Musk and Sam Altman, see AI as a radical new force that could be even more frightening than nuclear weapons, and they devote enormous resources to the problem of “alignment,” which could mean anything from ensuring AI serves human goals to preventing it from destroying the world.

The Bletchley Park Declaration, signed by representatives of 28 countries and the European Union last year, warned that “Particular safety risks arise at the ‘frontier’ of AI… There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.”

The debate can be reduced to two crude, equally Manichean views. In one, a powerful cabal of wealthy Silicon Valley developers is rattling regulators with ghost stories about “misalignment” in an attempt to scare them into locking smaller competitors out of the market with onerous regulation. In the other, reckless engineers bent on ushering in a techno-apocalypse are fighting for a digital anarchism that could lead to bioterrorism or nuclear winter via AI mishap.

Lost in the rhetoric around AI safety is a more robust understanding of what powerful models like those used in ChatGPT actually do, and where they stand in a tradition of computing that goes back to the Turing test. That is no mere intellectual exercise when Washington lawmakers are scrambling to regulate a field they barely understand, surrounded on all sides by shadowy, well-funded lobbying interests.

With that in mind, I called David McDonald, a decades-long veteran of the AI industry and a developer of some of the earliest language-generation algorithms, to see how the debate looks to someone who has shepherded what we now call “generative AI” from its figurative infancy.
McDonald, who professed to be a fan of the AI critic Gary Marcus, expressed his frustration with what he called a “computer-illiterate” Congress. But even more than that, he warned that the current discourse around AI misunderstands something very fundamental about the technology, which he knows all too well from decades of studying semantics and language generation in computer systems: It’s not just that chatbots don’t “think,” but that they cannot think.

“I know enough about the brain to know that everybody is idiosyncratic when it gets down to the nuts and bolts,” McDonald said. “I’m very curious to see what happens, but I don’t see any danger… [worry about human-like AI] presumes common sense, and that computers have rich intentions, and at that point it’s a different beast.”

McDonald’s research and design work has focused on semantic algorithms that are tailored for specific uses, unlike the general-purpose predictive models that have captured the public imagination of late. He cited a 2021 paper describing those models as “stochastic parrots” that risk, with their massive-scale hoovering of data, directing “resources away from efforts that would facilitate long-term progress towards natural language understanding, without using unfathomable training data.”

“We would always be generating [language] for a purpose, where the thing we’re generating has some intent, and does not have much emotion,” he said. “Intent and semantics are alien concepts to a large language model, they don’t make sense. Large language models are just walking the most interesting statistical paths.”

Still, McDonald isn’t a complete LLM skeptic: He acknowledged that their intuitive, (mostly) polished output creates a powerful mystique, which makes it all the more important for regulators to truly understand what these models can and cannot do. And companies obsessed with “alignment” and the possibility of “artificial general intelligence,” like OpenAI and Meta, generally frame their alignment work, at least in public, as more of a project to help AI help humanity than to stop a rogue, “sentient” system from ending it. Right now, nonprofits linked to those very companies are staffing congressional offices with the goal of fixing the lack of education that McDonald lamented.

Lawmakers and regulators face a significant challenge in cutting through the morass of competing interests and ideologies that currently surround AI safety to figure out what risk the technology truly poses. They might, ultimately, have no choice but to do what would have occurred first to McDonald’s generation of computer scientists: Open up the machines and take a look inside for themselves.