Artificial intelligence has arrived as a Washington policy issue, complete with its own nascent landscape of lobbyists, nonprofits and policy shops. But only one of those nonprofits’ boards includes a pseudonymous X poster known only (until recently) as “Based Beff Jezos.”

The Alliance for the Future, launched with an article on X last week, is a 501(c)(4) that asserts “stagnation has been the root cause of our greatest national problems,” and proposes unfettered, open source AI development as the solution. It plans a techno-optimist full-court press that will “inform the media, lawmakers, and other interested parties about the incredible benefits AI can bring to humanity.”

I called the new nonprofit’s director, the D.C.-based researcher Brian Chau, to figure out exactly what he thinks the current policy debate around AI gets wrong, why he’s wary of direct association with the AI-maximalist “effective accelerationist” movement and how to balance the future-making potential of open source AI with the risks that even he and his fellow AI boosters acknowledge.

An edited and condensed version of the conversation follows:

I found out about this launch when a mutual of mine messaged me and said “There’s an effective accelerationist nonprofit now.” Do you agree with that characterization, and why or why not?

Effective accelerationism is not just a set of policy positions. They have positions on the meaning of life and the grand scale of humanity. I think of myself as more of a practical person, and Alliance for the Future is much more of a practical organization. They’re an interesting group; I definitely don’t have any ill will towards them, but the scope of their movement is a lot bigger than Alliance for the Future.

You wrote in your announcement about regulating AI being fundamentally different from regulating commodity-based policy fields like housing or energy. What is the difference?

What we call AI is a process. I much prefer the term machine learning to AI, but unfortunately, AI just ended up being the more popular term. In the academic sense, AI is a broad category of tools, broader than machine learning, which is a specific type of statistical process that’s used to create AI products like ChatGPT or DALL-E, or even just to handle logistics and data analysis. Almost all of the time, when people are talking about AI, they mean machine learning. This process is almost analogous to doing statistics or economics research over and over again. So when I see it used in different areas, I worry that people don’t see beyond ChatGPT or large language models in general, but the policies they put in place will ultimately apply to more mundane applications that are still very economically impactful, just more covert, or less flashy.

Does AFTF argue that the necessary tools to prevent AI harm already exist, and just need to be used by regulators?

I look at it from a cost-benefit perspective. It’s not worth harming the vast majority of innocent users to go after the bad actors. But if new legislation is written that goes after the bad actors and negligibly harms the average user, that would be something I would support, or at least not oppose.

You support open source AI development, the definition of which is somewhat disputed. How do you define open source, and weigh its risks and benefits?

There’s a lot of framing, especially by companies competing against each other, that there’s one “real” form of open source. To me, there’s no one open source that’s so much better than the others, and any company contributing to the open source community, to the extent that they’re willing to do so, is a net positive.

In terms of what we specifically look to support, I don’t specifically have much litigation in my portfolio, but from experts I’ve spoken to, both the code and the data [for open source models] would be protected under the First Amendment. There’s more dispute on this, but likely the [training] weights will be as well. Even if the weights are ultimately decided as not protected under the First Amendment, I would still strongly support protecting these companies [legally], and I would strongly oppose anything that would prevent the publication of open weight models.

Do you see any similarities between the growth of AI as a policy field and that of crypto — are there AI super PACs in the future?

There are a few distinct differences. One is that there’s a lot more litigation in crypto, focused on very specific powers granted to executive agencies such as the Securities and Exchange Commission. It wasn’t about “What are the new laws surrounding crypto,” but “What are the old laws surrounding crypto.” There might be a bit of First Amendment litigation, if there are attempts to regulate open source.

Another is that there are a lot more tangible applications of AI. People can see AI in their lives in ways that, at least from my perspective, they can’t see crypto in their lives nearly as well, so I think the political coalition for AI will be a lot broader. Whether someone is left-wing or right-wing, they can still come to an agreement that AI gives practical benefits to their constituents.