Hello, and welcome to this week's installment of the Future in Five Questions. Today I spoke with Ben Recht, an engineering and computer science professor at the University of California, Berkeley and author of the arg min Substack, where he writes about "the history, foundations, and validity of decision making by people and machines." We discussed his belief that while "artificial intelligence" per se is overrated, the underlying statistical technology is deeply underrated; the late anarchist and anthropologist David Graeber; and the shortcomings of AI legislation. An edited and condensed version of the conversation follows:

What's one underrated big idea?

The power of statistical prediction and pattern recognition. The stuff that works [in artificial intelligence], the stuff that is impressive, is always just pattern recognition. We are always shocked by the ability of machines to make predictions better than people. Steph Curry goes to the free throw line 100 times and makes 93 free throws. The next time he steps up, I have a probability model in my head that says he has a 93 percent chance of making that free throw. That's a powerful, underrated idea: just translating frequencies from the past into beliefs about the future. It underlies all of this technology, and it's just that same simple idea. [A short code sketch illustrating this idea follows the interview.]

What's a technology that you think is overhyped?

Artificial intelligence. You take that idea about free throws and apply it to all text that has ever existed, and that's the idea behind the technology. My problem is that once you call it "artificial intelligence" instead of "pattern recognition," it conjures dangerous robots that are threatening our existence, or automating human jobs, or beings that are more powerful or more intelligent than we are. That's never panned out. That narrative has accompanied artificial intelligence since the term was coined in the 1950s, and it's the same narrative no matter what the technology does. On one hand, you have people like me who think that machine learning technology, or pattern recognition technology, is incredible: transcription services are incredible; handwriting recognition is incredible; coding assistants are incredible. These are incredible tools that make my life better on a daily basis. But then you see that we pour all this money into them as if they're going to create some new consciousness or end humanity, or that they're somehow equivalent to nuclear bombs. It's just incongruous.

What book most shaped your conception of the future?

"The Utopia of Rules" by David Graeber. Part of my work is in history; I'm in the process of finishing a project about the history of computing. You can't understand the future without digging into how we got here in the first place. Graeber lays out the technological arc from the end of World War II, to the landing on the moon, and then into our present. He uses that moon landing as the defining point between the "before times" and the "after times." The point he illustrates is that it's shocking to look at how much computing technology is bureaucratic technology. If you think about it that way — and you can even think about large language models in this way, as tools for writing boilerplate emails, or summarizing documents, or other things that maybe Graeber would later call "bullshit" in his follow-up book — that's what most computing technology exists for.
It was actually shocking to me to realize and internalize that when you sit in a computer science department, there are all these flashy demos that you try to impress people with, but most of the money is coming in to support enterprise technology. Enterprise is all bureaucracy. Understanding what the future might look like means understanding how bureaucracy is either going to remain entwined in our lives as we push forward, or there will have to be some brakes put on it, and I just don't think that's escapable.

What could government be doing regarding technology that it isn't?

There are two answers to this. The first is that I do think less is more when it comes to artificial intelligence legislation. I kind of wish they would just stop. There's this bill going through the California Legislature right now, SB 1047, and there's just absolutely no need for it. It looks to me like it's just going to empower the people who already have the most resources. My advice to government officials is that when they're this lax in enforcing antitrust laws, legislation like this does the opposite of what they intend, making the big AI companies even more powerful.

The second is that the government should think about computing bureaucracies and how to make them less oppressive. Anybody who deals with the federal government knows that the reporting gets worse all the time. Whenever you have to write a report, you're interfacing with some monstrosity of a computing system that we should have fixed, but because it has to go through a government contract to get built, you're dealing with something that is probably from 1994. They can't see past that mess, and how we're held down by these stodgy systems. Then they think, well, we'll just throw money at AI safety.

What surprised you most this year?

The past few days of financial reporting, which seem to suggest that investors are getting a little impatient with companies' over-investment in AI. It seems like the market wants an AI strategy, but doesn't want people to just be throwing money after it for the sake of throwing money after it. I'm not sure if this will hold up, but honestly, it's a bit encouraging, because I think the tech industry has tried to solve all its problems by throwing money and compute at things, since they know that's their competitive advantage. It would be surprising if that competitive advantage were running out.
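As an illustration of the statistical idea Recht describes above (turning observed frequencies into beliefs about the future), here is a minimal sketch in Python. It is not from the interview; the function names and toy corpus are hypothetical, chosen only to show how the same counting trick scales from free throws to next-token prediction.

```python
from collections import Counter, defaultdict

def empirical_probability(successes: int, attempts: int) -> float:
    """Turn an observed frequency into a belief about the next trial."""
    if attempts == 0:
        raise ValueError("need at least one past attempt")
    return successes / attempts

# Recht's example: 93 makes in 100 free throws becomes a 93 percent
# belief that the next one goes in.
print(empirical_probability(93, 100))  # 0.93

def next_token_counts(tokens: list[str]) -> dict[str, Counter]:
    """Count which token follows each token in a corpus (toy bigram model)."""
    counts: dict[str, Counter] = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

# The same idea applied to text: frequencies of what came next in the past
# become a probability model for what comes next. (Hypothetical toy corpus;
# real language models are vastly larger but rest on the same foundation.)
corpus = "the cat sat on the mat and the cat ran".split()
model = next_token_counts(corpus)
print(model["the"].most_common())  # [('cat', 2), ('mat', 1)]
```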