Hello, and welcome to another edition of the Future in 5 Questions. This week, we interviewed Jack Clark, co-founder and head of policy at Anthropic, the company behind frontier artificial intelligence models like Claude. Before this role, Clark was OpenAI’s policy director. We talked about how people are underestimating what AI will be able to do in a few years, hardening export controls to ensure AI technology is not stolen, and the power of the belief that scaling up compute will mean better AI.

What’s one underrated big idea?

People underrate how significant and fast-moving AI progress is. We have this notion that in late 2026 or early 2027, powerful AI systems will be built that will have intellectual capabilities that match or exceed Nobel Prize winners. They'll have the ability to navigate all of the interfaces… they will have the ability to autonomously reason over kind of complex tasks for extended periods. They'll also have the ability to interface with the physical world by operating drones or robots.

Massive, powerful things are beginning to come into view, and we're all underrating how significant that will be.

What’s a technology that you think is overhyped?

Technology that I think is overhyped — and I wish wasn't — is virtual reality. I was the biggest wannabe fan of virtual reality for a long time. I remember trying on VR goggles in 2012 or 2013 and desperately hoping the technology was going to get better. I bought a VR headset during the pandemic to play a video game called “Half-Life: Alyx,” and that was the only thing I used it for. And I tried, but basically the end result was I spent a few hundred dollars to play one game, and now it's gathering dust in my garage.

Everyone wants a better way to interface with computers, and everyone was hoping VR would be that thing, but it still doesn't quite feel like it's cooked.

What could the government be doing regarding technology that it isn’t?

There are two genres of work the government can be doing. One is preparing to take advantage of this technology in a larger-scale way than it currently is. Within that, it's about securing the resources that you need to make use of AI. That includes compute. It includes the further domestic build-out of semiconductor manufacturing. It includes energy — you obviously want to build out energy provisioning in the United States and in allied countries. And it's also about instrumenting the economy, through things like surveys and some of the regular data-gathering exercises we do, to see the effects of AI in the economy, because it's going to drive huge amounts of demand, and it's going to drive changes in how the economy works.

The second bucket is about national security. The other side of securing compute domestically is hardening export controls for countries of concern. Finding a way to continue the work of the first Trump administration on hardening export controls on compute that goes into AI is going to be really important. So will finding ways to improve the security of frontier labs like Anthropic, where we're all investing in computer and personnel security. But there's greater work to be done here to ensure that these very valuable assets being developed by the Western labs aren't easy to steal if you're a nation-state, perhaps one motivated by export controls to try and steal the IP of Western labs. And finally, making a further investment in national security testing and evaluation of these systems.
As DeepSeek showed, it's not just American companies that are going to produce notable systems, and we are going to need to be able to test AI systems for national security properties as they appear, whether those systems are foreign or domestic.

What book most shaped your conception of the future?

It is a book called “There Is No Antimemetics Division” by qntm. It’s about a research organization that deals with ideas that are dangerous to think about. And it studies antimemetics: ideas that, when you think about them, erase themselves or become dangerous to you.

It's changed my thinking about the future because a lot of what we work on in artificial intelligence research is about coming up with simple ideas like scaling laws, which is the idea that you can make systems better by increasing the amount of compute in them (a generic sketch of such a law appears at the end of this piece). And the ideas themselves have tremendous power. Once you've realized that you can improve the performance of AI systems by adding more compute, you have a deep competition on compute scale-up between the U.S. and China. And the U.S. labs themselves are in fierce competition with one another to build the largest compute infrastructure that's ever been built on the planet, just to get better AI systems — because of a single, simple idea.

This book is a really fun, kind of playful attempt to take an idea like that and make it extremely real, while also being a fun sci-fi page-turner. I recommend it to everyone.

What has surprised you the most this year?

The thing that's really surprising is that AI systems can now recover from errors without you telling them that they've made a mistake. That has happened with this new class of reasoning models that Anthropic and competitors like DeepSeek put out. In 2024, when an AI system made a mistake, it would just keep making the mistake, but in more complicated and expensive ways, and nothing good came of it. Now, the AI system makes a mistake and then says to itself after a minute or so, “Huh. It seems like none of this is working. I should check my work.” And then it will say, “Huh. It seems I made a mistake a while back; I should restart from there.” And that is a total change in how these systems work.

It's surprising to me that it's happened this quickly, and it returns to what I said at the beginning of this interview: really powerful AI systems are within sight, and one of the things to remember is they keep getting better faster than even experts like myself expect, which is just very unusual in most fields. In AI, everyone keeps on being surprised by how well this technology works, and things keep on arriving ahead of schedule.
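A note on the scaling-laws idea Clark mentions above: the empirical literature (e.g., Kaplan et al., 2020) typically expresses it as a power law relating a model's loss to the compute spent training it. The sketch below is a generic illustrative form with placeholder symbols, not any specific published fit and not Clark's own formulation:

```latex
% A generic compute scaling law of the kind alluded to above.
% Illustrative form only: L is the model's loss, C is training
% compute, C_c is a fitted constant, and alpha is a small
% positive exponent. Lower loss means a better model, so adding
% compute buys a predictable improvement in quality.
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha}, \qquad \alpha > 0
```

The strategic consequence Clark describes falls out directly: if quality is a smooth, predictable function of compute, then compute itself becomes the contested resource.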