OpenAI’s Sam Altman kicked off 2025 with a daring declaration: The company now knows how to build artificial general intelligence, and is ready to turn its attention to the glorious future of superintelligence. That would be at once thrilling and scary. Artificial general intelligence — an AI that can outperform humans across the board, adapting to new challenges as the human brain does — is a massively hyped, long-sought milestone for the technology. Critics worry it could lead to mass job displacement, social disruption and possibly disastrous existential consequences. Believers see AGI as inevitable, and welcome its potential to create economic abundance and solve the world’s most pressing challenges.

And as a business proposition, it’s significant to the company for a very specific reason: Microsoft holds an exclusive cloud deal granting it access to OpenAI's technology, which ends once AGI is achieved. Should OpenAI’s products indeed cross the threshold to AGI, it would unlock obvious financial rewards for the company and Altman himself.

Altman’s tease forces a key question that clearly matters to the company, and could have real ramifications for policy as well: How is AGI being defined? What is it, and if we reach it, how will we know? Altman himself, in an interview with Businessweek that inspired his blog reflections, admitted that pinpointing exactly when his company will reach AGI is tough because “we’re going to move the goalposts, always.” When asked why OpenAI (but not Altman himself) has stopped publicly discussing AGI, he explained that it became “a very sloppy term.” The company uses a five-level scale to track progress toward human-level problem-solving — and Altman pointed out that for all of them, “you can find people that would call each of those AGI.” The goal, Altman said, has been to give clearer updates on progress, not to litigate what counts as AGI.
Gary Marcus, an outspoken AI skeptic, has called out how OpenAI and developers keep shifting what they consider AGI. Other AI experts quickly noted that the language in Altman’s bold declaration was carefully couched, raising uncertainties about whether simply knowing how to build AGI would be enough to actually create it (let alone create it responsibly). As Alamir Novin, a University of South Carolina professor and AI researcher, pointed out, “he’s not saying we have built AGI, or even if we can really have a clear path towards it.” It doesn’t tell us what, if anything, has changed to resolve remaining technical, conceptual and regulatory challenges.

Julian Togelius, an associate professor at New York University who works on artificial intelligence, wrote a whole book about AGI in which he spent a lot of time trying to figure out what it meant — and came to the conclusion that people should stop using the term altogether. "It's lazy and misleading," he told DFD, saying he’s found that definitions for AGI are either overly broad, fail to recognize that human ability is just a fraction of the technology’s potential, or are simply incoherent and unhelpful. “I think Sam Altman is literally saying this because it’s a contractual meaning in this contract with Microsoft that is worth many billion dollars.”

As some see it, Altman is merely acting as CEOs do, responding to pressure to keep his company in the spotlight, convince investors it’s ahead of the curve and market its future products as the technology that businesses across all sectors will inevitably want to buy. He also shared this month that OpenAI is not profitable, is actually losing money on its $200-per-month pro subscriptions for ChatGPT, and needs to raise more capital than it imagined. “He is making these grandiose statements where there’s no guarantee. Is this just his CEO talk, for financial incentive, or to maintain stability?
Is he being worried about the next four years and losing power somehow or his position as the leading LLM right now?” said Novin. “It’s going to get at least some investors’ interest.”

A further interpretation is that by intensifying the stakes, Altman can also ask for more from Washington. If AGI is close to ready, there’s more pressure and urgency for the government to step up its support and coordinate on a strategy to win the global AI competition, developers say. The first nation to develop AGI could hold an overwhelming, unsurpassable advantage in military and economic capabilities. There is another, sometimes countervailing, concern that AGI will be much more disruptive than current AI models, which only excel at certain tasks, and thus should attract policy attention for safety reasons. Many proponents of AGI development, including Altman, think the best-case scenario can outweigh the dangers only if the technology comes with the proper safeguards, regulations and alignment with societal values.

If nothing else, Altman seemed quite attuned to the political moment. He donated $1 million to Donald Trump’s inauguration, explaining it with a prediction that “AGI will probably get developed during this president’s term” and so “getting that right seems really important.” OpenAI has been beefing up its D.C. policy shop in preparation to angle for more government support, hoping to get leaders in Washington and the states to sign onto an ambitious plan to secure the computing and energy resources for AI’s future, as my colleague Mohar Chatterjee recently reported. Some of its recent key hires include staffers who formerly worked on the Biden administration’s microchip reshoring strategy and in the offices of key Hill Republicans like South Carolina Sen. Lindsey Graham. Altman said this week that the most important priority for Trump to honor is clearing the way to build AI infrastructure in the U.S., like data centers and the power plants that deliver the energy to run their chip computations.
The future of AGI, as Altman sees it, also depends on breakthroughs in at least one other technology, which demands its own intervention from Washington in the form of regulatory approval, permitting fixes, and other supportive policies. Central to Altman’s plan for addressing AI’s soaring energy appetite — a major roadblock to progress in the industry — is to crack the code on nuclear fusion, particularly with a solution being developed by the startup Helion, where Altman just happens to serve as chairman. Only, that’s a technology steeped in its own uncertainty, and fuel for another hype cycle.