European negotiators finally reached a deal on the text of the European Union’s proposed AI Act on Friday, meeting a self-imposed deadline that felt a little more urgent than usual given the blistering pace of the technology’s development. But if you’ve ever tried to follow European lawmaking, you know that doesn’t mean the bill is done.

For all the news coverage and all the white papers on the EU law, it’s still an open question how much Europe is likely to influence AI going forward. The continent’s laws are shaped in part by the bureaucrats who draft them, and in part by the national governments that have to implement them and sell them to voters.

POLITICO’s Gian Volpicelli, who has followed the legislation closely since its proposal in 2021, reported yesterday for Pro subscribers on how continuing tweaks to the AI Act could still affect crucial issues like surveillance, facial recognition and the powerful generative models that drive systems like ChatGPT. (In fact, the law was first drafted before ChatGPT even existed, which caused a major snag in the final phase of negotiations.)

With that in mind, I called Gian this morning to put the AI Act deal in a wider context: to discuss how we got here; who pushed for which policy outcomes and why; and what it could mean to live in a world where the technological frontier for AI has different rules depending on where you live.

The following conversation has been edited and condensed for clarity:

What were the main sticking points of the negotiation around the AI Act?

There were two main issues where the negotiators were at odds.

The first was whether, and how, to regulate foundation models — the scaffolding for powerful, general-purpose, task-agnostic AIs like ChatGPT or Bard. When the AI Act was first conceived in 2021, the European Commission proposed a “risk-based” approach, meaning you only regulate AI when it is used in certain scenarios we consider risky, such as education, the workplace, or critical infrastructure. Models like ChatGPT are so powerful, and so versatile, that it’s not clear how you would regulate them in this risk-based way. But politically, members of the European Parliament who are up for re-election in June 2024 cannot go in front of potential voters and say, “The bill I worked on does not apply to ChatGPT.” There was also an economic argument against regulating these models: European governments, especially France, Italy and Germany, opposed doing so because they think it would hamper innovation.

The second issue was AI in law enforcement, and specifically facial recognition, which the European Parliament wanted to ban completely. Individual governments, of course, wanted to retain the ability to use these tools for law enforcement.

How did they resolve the dispute around foundation models?

The very term “foundation model” was eventually dropped and replaced with “general-purpose AI with systemic risk.” There are several proposed ways of adjudicating whether a general-purpose AI system poses a systemic risk: one is the amount of compute it wields; another is how many domains it can be used in. Other proposed measures are the number of users, business users specifically, and how good the system is at certain tasks. If a system is very good at one task, it can potentially be classified as a systemic risk in that area.
What has been the response from European AI companies like France’s Mistral, which pushed back against more restrictions around “foundation models”?

The general sentiment coming out of France is that it’s not really over. The text has not actually been finalized; this is a political agreement. There’s agreement on most points, but the devil is in the details. If you ask the Spaniards [Spain currently holds the rotating presidency of the Council of the EU — ed.], they’ll tell you that everything is essentially decided and that 99 percent of the text is more or less agreed upon. But if you turn to Paris and listen, you’ll hear a symphony of backlash, coming all the way from President [Emmanuel] Macron, who says France will keep working to make sure innovation isn’t harmed. I don’t think there is a lot of enthusiasm coming from these quarters, but it’s a far better outcome for them than the original proposal from Parliament and the Spanish-led Council, which imposed a much heftier list of obligations on what were then called foundation models.

What are some of the significant details that remain to be hammered out in the text?

We know that the final round of negotiation mostly focused on general-purpose AI, law enforcement and national security, and various bans and prohibitions. So that’s probably where wording is still missing, but we don’t have any visibility into what is to be added in continuing “technical meetings.” In some capitals there’s still uncertainty about what is actually being inked.

What impact is the law expected to have on American AI companies and their relationship with the European market?

American companies might be a bit more cautious before releasing some of their more cutting-edge products in the EU. Claude, Anthropic’s chatbot, is still not available in the EU; that’s the kind of thing that comes to mind. Some European companies have also raised concerns about investors cutting back funding for the continent’s AI firms. If talent can move to the U.S., it probably will. That’s a problem for countries such as France that hope to lure talent back from Silicon Valley: the founders of Mistral all hail from Silicon Valley companies, and there could be a counterintuitive reverse shuffle if they were to leave the continent and go back.