OpenAI has expanded its policy footprint significantly over the past year, publishing a “blueprint” for AI infrastructure development in November and setting up shop in capitals across Europe. This morning the company published another such “blueprint,” this time focusing on competition with China and domestic safety concerns.

It’s one more example of the biggest AI developers trying to shape the policy conversation around everything from energy to defense and national security, as their technologies inch toward the center of it and a hawkish second Trump administration prepares to take office.

OpenAI has been a major proponent of the idea that AI is a civilizationally game-changing technology (approaching “artificial general intelligence,” in industry parlance). Initially, its CEO Sam Altman loudly stumped in Washington for regulation of the biggest and most powerful systems; more recently the company has been pushing an expansionist line.

Now Elon Musk, one of the company’s co-founders (who has his own AI company and has become arguably OpenAI’s most bitter foe), is poised to wield major influence in the second Trump White House, raising questions about what kind of powerful frenemy he might be.

Of note given Meta’s recent pull away from content moderation (and Musk’s “free speech” politics): An early draft of the OpenAI blueprint shared with DFD contained language saying “models should be trained toward neutral political values as their default,” but that language wasn’t present in the final version published today. (A spokesperson for OpenAI said the company “simplified the language for the blueprint,” and cited recent remarks from Altman about AI and politics made during an interview with The Free Press’ Bari Weiss.)

As the company plans to bring its newest and most sophisticated AI tools to Washington this month for demonstrations with lawmakers, DFD spoke with OpenAI’s vice president for global affairs, Chris Lehane, about how the company plans to work with President-elect Donald Trump’s White House, whether it anticipates any static from Musk when it comes to their goals for governance, and their vision for AI as “an issue that transcends partisan politics.”

An edited and condensed version of the conversation follows:

How did you settle on these priorities going into 2025?

Sam recently shared a little bit of where he sees the technology going, seeing the path to AGI and then superintelligence. We’ll be in D.C. this month doing product previews and demos for a policymaker audience to give them a concrete sense of what’s coming. The pace of the technology is moving forward, and we think it’s important that [policymakers] have access and visibility.

Second, with the DeepSeek news [about the Chinese company developing AI systems to rival American companies’, despite U.S. attempts to slow down the industry], whether you want to call it a Sputnik moment or whatever, it’s made obvious what we have been saying for some time: that there is a real competition between the U.S. and the Chinese Communist Party on this.

Third, obviously, you have a new administration coming in, a new Congress coming in, state legislatures. We believe the U.S. needs to be looking through the front window and not the rearview mirror in terms of how we start to think about national security and economic competitiveness and how critical they are to U.S. success. This is a time when we need to think big, act big and build big, given the stakes.
How do you envision your relationship with this incoming administration, given its ties to the industry? Have you been in contact with David Sacks, Trump’s pick for an AI czar?

We’ve been doing all the conventional stuff that any smart company would do when engaging with an incoming administration. The clear signal is that this is an administration that is really focused on the national security and competitiveness issues, and this is an issue that does transcend partisan politics.

The document is trying to get at a mindset, where I think a lot of the conversations around AI policy have been looking backwards as opposed to leveraging the opportunity that exists here and how important it is. We’re trying to be pro-innovation here, not for the sake of innovation, but because there are huge stakes.

I had a minor role at the Clinton White House in the mid-1990s working on the Telecommunications Act. We wanted to make sure that the U.S. was the center of what was then called the World Wide Web, and is now the digital economy. We’re now in another geopolitical moment where we need to have an overarching strategy about what we want to accomplish.

The blueprint recommends a consortium for AI developers to figure out best practices for working with the national security state. What would that look like, and why is it necessary?

I was having some conversations in D.C., and folks were talking about how when Pearl Harbor happened, literally a month later Detroit was building airplanes and tanks. God forbid we’re in a similar situation, but if a national challenge happens, that ability would run into some real blocks, given the rules and regulations around procurement. The defense space is made for an analog world and not a digital world. The consortium would bring together cutting-edge companies so they could work quickly with the U.S. government.

One of the incoming president’s closest advisors, Elon Musk, is suing your company and has been extremely hostile to it in public. Is that something you’ve considered regarding your relationship with the next administration?

At the end of the day, I think anyone who’s an American is going to want to make sure that we’re positioned as well as we can be on national security, and as the premier innovator here we have to be in the middle of that conversation if your desired goal is for the U.S. to be leading and winning. Whatever your particular perspective may be, and however you feel about a company, at the end of the day I do think everyone coalesces around that.