Hello, and welcome to this week's edition of the Future in Five Questions. This week I spoke with Rohit Krishnan, an engineer, economist, venture capitalist and all-around AI raconteur who writes the Strange Loop Canon Substack. Krishnan recently proposed in an essay a new framework for AI regulation based on "human flourishing," and here we discussed the difficulty of educating government on AI, the difficulty of identifying talent (in any setting) and a whole slew of sci-fi book recommendations. An edited and condensed version of the conversation follows:

What's one underrated big idea?

Almost everything in the world is downstream of strong talent selection, and our methods of selecting talent are getting worse even as we lean on them harder. Tests that used to work, work less well now. Rankings are gamed. Interviewing is hard. Even proof of work is getting gamed. Getting into a college, or a job, is impossibly hard unless you jump through hundreds of hoops. These methods also fail because they assume that people's inputs are inherently measurable and can be mapped to outputs. Unfortunately the world is too high-dimensional for that. There are way too many paths to excellence. The recent Nobel Prizes in Physics and Chemistry going to people who worked in AI is a perfect case in point. We end up stuck with Berkson's paradox, where the more we try to find something at the very top of a distribution by measuring ever more finely, the more the very attributes you're measuring with become anticorrelated rather than correlated.
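To make the selection effect Krishnan describes concrete, here is a minimal simulation of Berkson's paradox (an editor's illustration, not his; the attribute names and cutoffs are invented): two qualities that are independent across the whole population become anticorrelated among the people who clear an ever-higher bar on their combined score.

```python
# Minimal Berkson's paradox simulation (hypothetical attributes and numbers):
# two merits that are independent in the population turn negatively correlated
# once we select only the very top of their combined score.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

test_scores = rng.normal(size=n)  # e.g., standardized test performance
portfolio = rng.normal(size=n)    # e.g., proof-of-work / portfolio quality

# In the full population the two attributes are uncorrelated by construction.
print("population corr:", np.corrcoef(test_scores, portfolio)[0, 1])

# Select ever more finely at the top of the combined distribution.
combined = test_scores + portfolio
for top_pct in (10, 1, 0.1):
    cutoff = np.percentile(combined, 100 - top_pct)
    sel = combined > cutoff
    r = np.corrcoef(test_scores[sel], portfolio[sel])[0, 1]
    print(f"top {top_pct}% corr: {r:.2f}")  # grows more negative as selection tightens
```

Tighten the cutoff and the correlation drifts further below zero, which is exactly the failure mode he describes: at the very top, the signals you selected on start trading off against each other.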
What's a technology that you think is overhyped?

Crypto. Because you still can't really use it for anything outside crypto. Which is absolutely crazy for something that had the aspiration of changing the entirety of modern commerce, or even the monetary system. Fairly frustrating, actually. I was a big fan of at least the organizational experimentation.

What book most shaped your conception of the future?

One book is hard to choose! And it changes often. My blog is named after the "strange loops" from "Gödel, Escher, Bach: An Eternal Golden Braid" by Douglas Hofstadter, and just like good Hamming questions, the answers to this change over time. So today I'd say "The Diamond Age" by Neal Stephenson. The very notion of the Young Lady's Illustrated Primer by itself should place it at the highest echelon of what the future ought to look like. That, and the neo-Victorian future transformed by nanotechnology. It combines materials science and education and technology in the way only Neal Stephenson can, to truly change the way the world could work.

The other is Douglas Adams' "The Hitchhiker's Guide to the Galaxy." Because I do think any sense of the future that isn't wildly weird is liable to reinforce our perceptions rather than open up our minds. I think of the Culture novels by Iain M. Banks in the same vein. I read them like I'm reading Wikipedia for a future parallel universe that doesn't yet exist.

What could the government be doing regarding technology that it isn't?

Trying to understand it more, by ensuring it actively learns before trying to figure out how a technology fits into its policy worldview. AI policy is a good place to start: as soon as AI was seen as important, the focus was on creating laws and regulations as quickly as possible. Interest groups flocked in and destroyed any semblance of amity. What would be better is to hold off on doing things while learning about the new technologies: what they do, how they work, how they impact and change the world around us.

This might require setting up new types of institutions for research, testing, knowledge gathering and dissemination, rather than relying on law and policymaking as the government's primary job. This is hard because the government isn't a singular entity; it's a collective of a large number of individuals, incentives at cross purposes and organizational inertia acting over multiple hierarchical scales. But that's also the good news, because it means you can start small and see if it scales. The Advanced Research and Invention Agency and the U.K. government's efforts to do AI evaluations are a great case study here.

What has surprised you the most this year?

The development of decentralized training for AI models, from DiLoCo and DiPaCo at Google DeepMind to DisTrO from Nous Research. It completely changes the game in terms of how we ought to think about large models and what we can do to train them. It means pure compute thresholds are not going to be very useful, and that we will have even less of a way to centrally control the means of knowledge production.
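For readers unfamiliar with why this matters, here is a toy, single-process sketch of the DiLoCo-style recipe as this editor reads the paper (the model, loaders and hyperparameters are placeholders, not DeepMind's code): each worker takes many local optimizer steps on its own data shard, and only the accumulated parameter deltas are exchanged and averaged, so communication happens once every few hundred steps instead of every step.

```python
# Toy sketch of a DiLoCo-style round (an illustration under stated assumptions,
# not the published implementation): workers train locally with AdamW, then an
# outer step averages their parameter deltas ("pseudo-gradients") and applies
# them to the global model. Communication happens once per round, not per step.
import copy
import itertools

import torch
import torch.nn.functional as F

def diloco_round(global_model, worker_loaders, inner_steps=500, outer_lr=0.7):
    deltas = []
    for loader in worker_loaders:                      # one pass per "worker"
        local = copy.deepcopy(global_model)
        opt = torch.optim.AdamW(local.parameters(), lr=1e-4)
        for x, y in itertools.islice(itertools.cycle(loader), inner_steps):
            opt.zero_grad()
            F.cross_entropy(local(x), y).backward()    # assumes a classifier
            opt.step()                                 # local steps, no comms
        # Pseudo-gradient: how far this worker moved from the global weights.
        deltas.append([g.detach() - l.detach() for g, l in
                       zip(global_model.parameters(), local.parameters())])
    # Outer step: average pseudo-gradients across workers and apply them.
    # (The paper uses Nesterov momentum here; plain SGD keeps the sketch short.)
    with torch.no_grad():
        for i, p in enumerate(global_model.parameters()):
            p -= outer_lr * torch.stack([d[i] for d in deltas]).mean(0)
```

The point of the design is the communication budget: if nodes only need to sync once per round, they no longer have to sit in one tightly coupled datacenter, which is why Krishnan argues that regulation keyed to centralized compute thresholds loses its grip.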