What AGI hype means for Washington

How the next wave of technology is upending the global economy and its power structures
Jan 08, 2025
 

By Christine Mui

With help from Derek Robertson


OpenAI CEO Sam Altman. | Jason Redmond/AFP via Getty Images

OpenAI’s Sam Altman kicked off 2025 with a daring declaration: The company now knows how to build artificial general intelligence, and is ready to turn its attention to the glorious future of superintelligence.

That would be at once thrilling and scary. Artificial general intelligence — an AI that can outperform humans across the board, adapting to new challenges as the human brain does — is a massively hyped, long-sought milestone for the technology. Critics worry it could lead to mass job displacement, social disruption and possibly disastrous existential consequences. Believers see AGI as inevitable, and welcome its potential to create economic abundance and solve the world’s most pressing challenges.

And as a business proposition, it’s significant to the company for a very specific reason: Microsoft holds an exclusive cloud deal granting it access to OpenAI's technology, which ends once AGI is achieved. Should OpenAI’s products indeed cross the threshold to AGI, it would unlock obvious financial rewards for the company and Altman himself.

Altman’s tease forces a key question that clearly matters to the company, and could have real ramifications for policy as well: How is AGI being defined? What is it, and if we reach it, how will we know?

Altman himself, in an interview with Businessweek that inspired his blog reflections, admitted that pinpointing exactly when his company will reach AGI is tough because “we’re going to move the goalposts, always.” When asked why OpenAI (but not Altman himself) has stopped publicly discussing AGI, he explained that it became “a very sloppy term.”

The company uses a five-level scale to track progress toward human-level problem-solving — and Altman pointed out that for all of them, “you can find people that would call each of those AGI.” The goal, Altman said, has been to give clearer updates on progress, not dispute what’s AGI or not.

Gary Marcus, an outspoken AI skeptic, has called out how OpenAI and developers keep shifting what they consider AGI. Other AI experts quickly noted that the language in Altman’s bold declaration was carefully couched, raising uncertainties about whether simply knowing how to build AGI would be enough to actually create it (let alone create it responsibly). As Alamir Novin, a University of South Carolina professor and AI researcher, pointed out, “he’s not saying we have built AGI, or even if we can really have a clear path towards it.” Altman’s statement doesn’t tell us what, if anything, has changed to resolve the remaining technical, conceptual and regulatory challenges.

Julian Togelius, an associate professor at New York University who works on artificial intelligence, wrote a whole book about AGI in which he spent a lot of time trying to figure out what it meant — and came to the conclusion that people should stop using the term altogether.

"It's lazy and misleading," he told DFD, saying he’s found that definitions for AGI are either overly broad, fail to recognize human ability as just a fraction of the technology’s potential, or simply incoherent and unhelpful. “I think Sam Altman is literally saying this because it’s a contractual meaning in this contract with Microsoft that is worth many billion dollars.”

As some see it, Altman is merely acting as CEOs do, responding to pressure to keep his company in the spotlight, convince investors it’s ahead of the curve and market its future products as the technology that businesses across all sectors will inevitably want to buy. He also shared this month that OpenAI is not profitable, is actually losing money on its $200-per-month ChatGPT Pro subscriptions and needs to raise more capital than it imagined.

“He is making these grandiose statements where there’s no guarantee. Is this just his CEO talk, for financial incentive, or to maintain stability? Is he being worried about the next four years and losing power somehow or his position as the leading LLM right now?” said Novin. “It’s going to get at least some investors’ interest.”

A further interpretation is that by intensifying the stakes, Altman can also ask for more from Washington. If AGI is close to ready, there’s more pressure and urgency for the government to step up its support and coordinate on a strategy to win the global AI competition, developers say. The first nation to develop AGI could hold an overwhelming, unsurpassable advantage in military and economic capabilities.

There is another, sometimes countervailing, concern: that AGI will be much more disruptive than current AI models, which only excel at certain tasks, and thus should attract policy attention for safety reasons. Many proponents of AGI development, including Altman, think the best-case scenario can outweigh the dangers only if the technology comes with the proper safeguards, regulations and alignment with societal values.

If nothing else, Altman seemed quite attuned to the political moment. He donated $1 million to Donald Trump’s inauguration, explaining it with a prediction that “AGI will probably get developed during this president’s term” and so “getting that right seems really important.”

OpenAI has been beefing up its D.C. policy shop as it angles for more government support, hoping to get leaders in Washington and the states to sign onto an ambitious plan to secure the computing and energy resources for AI’s future, as my colleague Mohar Chatterjee recently reported. Some of its key recent hires include staffers who worked on the Biden administration’s microchip reshoring strategy and others from the offices of key Hill Republicans like South Carolina Sen. Lindsey Graham. Altman said this week that the most important priority for Trump is clearing the way to build AI infrastructure in the U.S., like power plants to deliver the energy that chip computations demand, and data centers.

The future of AGI, as Altman sees it, also depends on breakthroughs in at least one other technology, which demands its own intervention from Washington in the form of regulatory approval, permitting fixes, and other supportive policies. Central to Altman’s plan for addressing AI’s soaring energy appetite — a major roadblock to progress in the industry — is to crack the code on nuclear fusion, particularly with a solution being developed by the startup Helion, where Altman just happens to serve as chairman. Only, that’s a technology steeped in its own uncertainty, and fuel for another hype cycle.

ai's fork in the road


Arati Prabhakar, then the nominee to serve as director of the White House Office of Science and Technology Policy, testifies July 20, 2022, before the Senate Commerce, Science, and Transportation Committee. | Francis Chung/POLITICO

When it comes to AI, the incoming Trump administration and Republican Congress are promoting profound changes. The president-elect has promised to axe Biden’s sweeping AI executive order as one of his opening acts, including sections meant to address the threat that biased algorithms could exacerbate racial discrimination, inequality and other forms of harm if left unchecked. MAGA’s anti-woke warriors, including Elon Musk and key GOP lawmakers, are rallying to strip AI of what they see as burdensome guardrails that will stifle innovation and push “radical left-wing ideas” onto the tech’s development.

Biden’s Office of Science and Technology Policy Director Arati Prabhakar realizes the tides are turning — and on Tuesday, she sounded the alarm on the impact of the shift in priorities. “There is a future ahead in which we make everything very efficient, but that in the process, the future grows very, very dark, and that is because discrimination and bias get implemented at massive scale in housing and in lending, in criminal justice and in health care,” Prabhakar said at a National Academies of Sciences, Engineering, and Medicine event.

Prabhakar warned of a world where a lack of safeguards makes privacy a thing of the past — with every move, every click, every worker under constant surveillance. Misinformation and deepfakes spiral out of control, warping reality, while rampant AI fraud destroys all trust.

Prabhakar defended the Biden administration’s progress in making it illegal to use AI to impersonate businesses or government agencies, launching the AI Safety Institute to set standards for trustworthy AI, and laying the groundwork for deep, fundamental research to drive future progress. She touted a second, brighter future of AI advancing rapid drug development, closing educational gaps and making weather forecasts more accurate.

“It's going to be wonderful for productivity, if we can get it right,” she said.

furious france

A French minister is pleading with the European Commission to stop Elon Musk’s political meddling.

POLITICO’s Émile Marzof reported for Pro subscribers today that Jean-Noël Barrot said he had “called on the European Commission on several occasions to take much more vigorous advantage of the tools democratically given to it to dissuade such behavior” and wants Brussels to pursue action against Musk and X under the Digital Services Act.

Potential violations of the DSA could include Musk’s upcoming hosting of German far-right leader Alice Weidel on the platform. If the Commission finds that Musk has put his thumb on the scale to an extent that violates the DSA, it could levy fines of up to 6 percent of X’s annual global revenue.

a plea for crypto caution


Commodity Futures Trading Commission Chair Rostin Behnam testifies on Sept. 15, 2022, before the Senate Agriculture, Nutrition and Forestry Committee. | Francis Chung/POLITICO

Weeks before leaving office, a top regulator is calling for Congress to take action on crypto as an industry-friendly Trump administration comes in.

POLITICO’s Declan Harty reported for Pro subscribers on Commodity Futures Trading Commission Chair Rostin Behnam’s warning in a speech that risks posed by crypto to financial stability are “intensifying in the absence of federal legislation.”

“We’ve seen this before in our history where we leave large swaths of finance outside of oversight and responsibility, and we have seen time and time again that it ends badly,” Behnam said at the Brookings Institution.

The CFTC has been a key crypto regulator during President Joe Biden’s administration, and Behnam argued that going forward it will “blow past our historical roots in the agricultural markets and leave us at the threshold of the increasing expansion, transformation, and sometimes, gamification of the derivatives markets.”

post of the day

For the first time in my life I just drove a keyless modern car that told me what to do, and had tons of small things I couldn’t override — when to put on brights, when to speed up suddenly, all in the name of the safety algorithm, etc — and I’m surprised with how angry and sad it made me


Stay in touch with the whole team: Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com); Daniella Cheslow (dcheslow@politico.com); and Christine Mui (cmui@politico.com).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

Follow us on Twitter

Daniella Cheslow @DaniellaCheslow

Steve Heuser @sfheuser

Christine Mui @MuiChristine

Derek Robertson @afternoondelete

 


