Hello, and welcome to this week’s installment of the Future in Five Questions. This week I interviewed Vilas Dhar, an artificial intelligence researcher, president of the AI-focused philanthropic outfit the Patrick J. McGovern Foundation and member of the United Nations’ High-level Advisory Body on AI. Dhar took the occasion of this week’s U.N. General Assembly to discuss with us his vision for global governance of AI, the fundamental changes it will bring to how we relate to technology, and why AI should be provided as a public service. An edited and condensed version of the conversation follows:

What’s one underrated big idea?

Global governance of AI. We’ve done the work to demonstrate why AI requires cross-border regulation; how AI is experienced by people is not constrained by boundaries or nation-states. We need some sort of consensus, multilateral approach to it. The AI Advisory Body’s report makes concrete recommendations about what we need to put in place to begin to regulate AI as a global good, from scientifically qualified, credentialed advisory bodies to global investment in AI capacity.

Like all good ideas whose time has come, people pay attention and start fighting against them. We’re seeing in the global discourse how people are now trying to push back against global governance, whether it’s domestic entities that just don’t want to give over power to the U.N., or states that are pushing, for authoritarian reasons, against having any kind of accountability for their actions in a global construct.

What’s a technology that you think is overhyped?

I think there’s an intention to scare people into thinking that AI is going to be part of every interaction between people and their lived environments. I’ll just be honest with you: I don’t need to talk to my refrigerator or my microwave every day. I don’t need AI to be a part of every relationship I have with technology.
I think the fact that consumer goods companies are trying to figure out how to integrate AI, that every service provider is trying to figure it out, is a good thing generally for innovation, but I don’t think that we as consumers need to agree to have AI in every relationship we have.

What book most shaped your conception of the future?

I wanted to give two different answers to this. One is Thomas Kuhn’s “The Structure of Scientific Revolutions.” The book shifts the model of scientific inquiry from one that’s linear and cumulative, this idea that we take what happened last and then we’ll make an improvement on it, and actually redefines it to what we are experiencing today with AI, which is the idea of paradigmatic shifts that fundamentally change our understanding of what’s possible.

I’m a computer scientist, I’ve worked in AI for two-plus decades, and the shifts that AI is creating to me are not about the quality of the next foundation model, or the number of parameters. It is about how AI tools are going to fundamentally change our expectations of social, economic and political constructs. We can begin to attack basic premises, like the idea that supply and demand intersect at an optimal frontier, because AI-driven automation changes the entire supply side of that equation. We’re going to have to come up with a new economic model for how our society functions, and that’s not going to happen through linear and iterative processes. It means that in our lifetime, we will have a transformative shift in foundational assumptions about the world we live in.

The other book is Hermann Hesse’s “Siddhartha,” which takes us to a place of introspective redetermination when the world changes around us. If we accept the assumption that AI will change things in really meaningful ways, books like “Siddhartha” force us to ask the question: What do we decide to become as individuals when the world we operate in changes dramatically around us?
Where do we find our source of meaning and truth? What is it that anchors us in our human identity?

What could the government be doing regarding technology that it isn’t?

The challenge that I face every day when I talk to communities all across the country, all across the world, is that we’ve all given up this sense that we have agency in making decisions about technology. People feel that technology decisions are made by technology companies, and they’re responded to and constrained by governments. I think that’s a terrible way to think about how we make technology decisions.

Governments should continue to do what they do in terms of regulating the actions of technology companies, but I think governments need to own the mantle of also being the primary decision makers about our technological future. I think they need to invest significantly in building capacity around these tools, which means investing in educational systems and skilling services, but also in things like building public access data sets and public compute resources. In the U.N. report that we issued Thursday, we made a call for a global fund for AI that invests in local capacity building. Governments aren’t doing what I think their mandate should be, which is moving public funding to develop public capacity around AI.

If you think about the really aspirational elements of science and technology policy, and you think about the space program, the government was instrumental in creating an aspirational vision for what a space program meant for America. I don’t think we’ve had that yet around AI. Instead of worrying about what tech companies say will be the future of AI and trying to regulate against it, we should be investing in a new vision of what positive AI looks like, and building public support for that.

What has surprised you the most this year?
The pace and speed by which communities and nonprofit organizations, entities that are as far away from the technical reality of AI wonkiness as possible, are coming forward to step in with real opinions and viewpoints about how these tools might affect their interests and what they should do to respond. I’m seeing nonprofit organizations embrace AI to build solutions that will quickly scale, that will address the most significant vulnerabilities that their communities face.

At the same time, nonprofits that stayed out of debates about the internet and about social media are now stepping in to say: We have a point of view on the data sovereignty of the communities we support. We have a point of view on ensuring that AI systems aren’t used in ways that advance authoritarian interests and surveillance. We have an interest in ensuring economic opportunity for our populations.

It’s happened in really just 12 to 24 months, a massive flowering of interest, engagement, expertise and development. That gives me a lot of hope that, even if we didn’t do so great with the formation of the internet to ensure equitable outcomes, and we definitely didn’t do so great with social media, we have realized AI decisions have to be made by all of us.