Hello, and welcome to today’s edition of the Future in Five Questions. Steven interviewed Elizabeth Kelly, the director of the U.S. AI Safety Institute, an agency the Commerce Department set up roughly six months ago to help implement President Joe Biden’s executive order on artificial intelligence. She discussed why safety is critical to AI’s future and where existing technologies fall short, as well as how global governments are “making sure that AI works for humans.” You can hear more from Kelly on today’s episode of the POLITICO Tech podcast.

This conversation has been edited and condensed for clarity:

What’s one underrated big idea?

Safety is essential to, and not at odds with, fueling breakthrough technological innovation. Without safety, consumers will not trust new technology or let it into their lives, and a technology like AI can’t go from a novelty to a driver of the economy. Safety fosters trust. Trust drives consumer confidence and fuels adoption. Adoption accelerates further innovation. Just today, we actually released a set of draft guidelines for how developers of foundation models can protect against misuse of their models and mitigate possible harms.

What’s a technology that you think is overhyped?

Many of the technologies that are intended to make AI safe to deploy are overhyped at present. There’s growing evidence that current technological approaches to AI safety are not sufficient to address the wide variety of risks posed by advanced generative AI models. We know that current safeguards for AI models need more work if we’re going to rely on them to protect people from harm. For example, right now, jailbreaks are too easy, and models can be fine-tuned to remove safety measures. Watermarks for AI-generated content are also easily removed. The good news is that we’re seeing research to address issues like content provenance, brittle safeguards and toxic outputs, but we need major investments in these areas.
What book most shaped your conception of the future?

I am a big fan of Power and Progress by leading MIT economists Daron Acemoglu and Simon Johnson. It’s a powerful new book that charts how, over the past 1,000 years, technological innovations have been the key to prosperity and how, for the societal benefits to be unlocked, technical power needs to be guided by social power. They illustrate time and time again how progress is not and has never been automatic or destined to occur, and how humans always have the agency to choose how to steer technological progress.

For me, the book really drives home the huge opportunity we have to shape AI in a way that works for people. That lets us realize the promise of enormous socially transformative benefits like individualized education, new drug discovery and development, carbon capture and storage. And the huge responsibility we have to protect people from its harms, both the here-and-now harms like discrimination, violations of privacy or the creation of synthetic content that depicts events that never happened, as well as the potential future risks to public safety and national security. And I think it’s fitting, because the book really drives home that democracy itself is a kind of technological innovation that we’re going to need for the future, so we can better coordinate and cooperate in the face of the growing complexity and challenge that AI will surely bring.

What could government be doing regarding technology that it isn’t?

With President Biden’s executive order, I really think we used every lever at our disposal to harness the benefits and mitigate the risks of AI. And here at the Safety Institute, and all across the government, we’ve been heads down over the last 270 days fulfilling the ambitious agenda that was set out by the president. And we’ve made real progress. All of that being said, there’s only so much the administration can do alone.
Congress has a critical role to play here to help our nation meet the moment on AI, like passing comprehensive privacy legislation or making the required investments that we need to see in talent, R&D, safety and more. The president and Commerce Secretary Gina Raimondo have been clear that Congress needs to act, and it’s heartening to see the bipartisan discussions that are already underway.

What surprised you most this year?

Obviously generative AI has made huge advances over the last two years, but, for me, perhaps the most surprising thing has been the speed at which governments, domestic and especially international, have grasped what this technology means for our future and moved quickly to respond. Governments are not known for moving particularly quickly, but we’ve already seen the global community really come together and take significant steps over the course of the last year.

There are a lot of impacts I could point to. One that comes to mind is making sure that we’re building in safety on the front end, which I think is actually going to prevent a lot of the misuse that we could see, and prevent harms that could stifle innovation down the line. Early engagement is also going to be key to making sure the benefits of AI are widely shared [and] that we’re driving investment towards the socially beneficial use cases in education, health care, climate, and that we’re really centering people and making sure that AI works for humans.