Hello, and welcome to this week’s installment of the Future in Five Questions. This week I spoke with Duane Pozza, a former Federal Trade Commission and Consumer Financial Protection Bureau official who is now a partner at Wiley Rein LLP working on legal issues related to emerging tech. We talked about why he thinks everyone involved in tech policy debates should approach them with a good deal of humility, the enduring wisdom of “Moneyball” and the pitfalls of states as AI policy laboratories. An edited and condensed version of our conversation follows:

What’s one underrated big idea?

Humility about where technology is headed. When I look at the AI regulatory landscape, I see policymakers who jump to worst-case scenarios, view technologies like AI through the lens of what could go wrong, and then try to aim [regulation] at that. If you don’t proceed carefully, you can stifle a technology’s beneficial uses. I don’t think that being humble about a regulatory approach means there shouldn’t be regulation at all. I worked in the government for six and a half years. But it takes an approach, or mindset, that allows the benefits of new technology to flourish, builds in some flexibility and is adaptive, instead of presuming to know all the answers up front about how a new technology should be regulated. This could be a learning process.

What’s a technology that you think is overhyped?

Artificial general intelligence, or otherwise fully autonomous AI. I’m not a technologist, so I can’t weigh in on the technical side of how close we are to developing this. But I do worry that too much of the AI policy discussion is driven by the potential of AI to either save us or destroy us, right? These are big existential questions. What’s more realistic is a slow and steady deployment of AI for narrower uses.
I wouldn’t say that there’s no role for worrying about the tail-end risks. It’s important that somebody thinks about them and has a risk-based approach to dealing with them. But I do think they drive the discussion a bit more than they should, when there’s a lot to talk about around AI without getting into those big-picture questions.

What book most shaped your conception of the future?

“Moneyball” by Michael Lewis. A key part of the book is about finding efficiencies using data. It’s a book about baseball players, but it’s also a book about the ways in which the Oakland Athletics’ management used data to identify players with undervalued traits. At the time this was an unconventional approach, but their use of new technology changed the way baseball operations have developed over the last 20 years, to the point where “moneyball” has become a kind of shorthand. You can see the arc of how the introduction of that approach, and data, and technology shaped this industry. Disruptors or innovators can take advantage of new models and get ahead for a while, and then everyone catches up. There have been plenty of startups that have gotten big using innovative data-driven approaches, and over time they either become incumbents or merge or partner with incumbents.

What could government be doing regarding technology that it isn’t?

In many areas it could be clearer about expectations. I don’t mean prescriptive regulations; I mean expectations about how folks deploying new technologies should or shouldn’t be acting, or potential issues they should be paying attention to. We’ll see how this goes in AI. I would look at crypto and digital assets as a place where this has gone wrong, and it’s been a mess. Federal regulators have been ambivalent about how securities laws apply to digital assets, and there’s been a lot of litigation around whether they’re securities or not. There could have been more of a framework.
It didn’t have to be super prescriptive, but there could have been a framework that gave more predictability for market participants who wanted to use this technology to innovate. People have different views about the benefits of the technology, but I think it’s pretty clear that the best way to encourage beneficial uses is not this period of uncertainty.

What surprised you most this year?

The rush to regulate AI, in particular by the states. A few years back, there was a big debate about whether states should be enacting their own privacy laws because of the potential to create a regulatory patchwork. That’s a pretty serious concern, with California obviously having led the charge. Now, on the privacy side, the dam has pretty much broken, with multiple states adopting laws. It’s a little surprising that they are now dipping their toes into AI regulation, though no big bills have passed. This is not necessarily something that’s going to happen immediately, but I do think it ties back to my previous point about humility. AI is the kind of technology that cries out for a uniform approach to whatever rules of the road or regulations there are going to be.