Hello, and welcome to this week's installment of The Future In Five Questions. Brendan recently spoke with Audrey Tang, a software programmer who served as Taiwan's first Minister of Digital Affairs from August 2022 to May 2024. As a self-professed "conservative anarchist," Tang frequently emphasizes the potential for emerging technologies to break down existing power structures and advance a radically accessible form of democracy. She's now a senior fellow at the Project Liberty Institute, an effort by billionaire real estate developer Frank McCourt to create "a more people-centric web" (in part by purchasing TikTok, if possible). Tang talks about her skepticism of plans to watermark AI-generated content, how governments and tech platforms can use AI to create constructive online spaces, and why Taiwan's January election was remarkably free of damaging deepfakes and polarizing attacks.

The following has been condensed and edited for clarity:

What's one underrated big idea?

I'll say "pro-social media," which is a term I coined. Unlike today's antisocial social media, which amplifies conflict for maximum engagement, pro-social media is a system that bridges social divides.

Initially, social media feeds were just sorted in reverse-chronological order. But then came the recommendation algorithms that maximize so-called engagement — and with them the inflammatory headlines, the knee-jerk reactions and everything else that replaces thoughtful discussion and diverse perspectives.

Now that we have good language models, research has shown that they can actually be used as a force for good, by detecting constructiveness, curiosity, context and so on, and applying those signals as new ranking algorithms. So instead of ranking on engagement, you can rank on how "bridging" a statement is. I've been working on bridging systems like this for a decade now. They have been shown to effectively depolarize people, because they highlight the unlikely consensus, the unlikely common ground, across people who otherwise have very different ideologies.
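To make that ranking shift concrete, here is a minimal sketch, not Tang's system or any platform's production algorithm: each comment carries approval votes from two opinion groups, and its "bridging" score is the approval rate of its least approving group, so the feed surfaces statements that both camps accept rather than the ones that generate the most raw engagement. The data shapes and the scoring heuristic are illustrative assumptions.

```python
from dataclasses import dataclass

# Sketch of "bridging" ranking: sort comments by how well they are received
# across opposing opinion groups instead of by raw engagement. The scoring
# rule (minimum approval rate across groups) is one simple heuristic.

@dataclass
class Comment:
    text: str
    votes: dict[str, tuple[int, int]]  # group -> (approvals, total votes)

    def engagement(self) -> int:
        # Engagement here is just total votes, approving or not.
        return sum(total for _, total in self.votes.values())

    def bridging_score(self) -> float:
        # A comment "bridges" only if every group approves of it, so the
        # score is the approval rate of the *least* approving group.
        rates = [ups / total for ups, total in self.votes.values() if total > 0]
        return min(rates) if rates else 0.0

comments = [
    Comment("Those people are ruining the country!",
            votes={"group_a": (90, 100), "group_b": (2, 100)}),
    Comment("Both camps seem to agree the rollout was confusing.",
            votes={"group_a": (60, 80), "group_b": (55, 75)}),
]

print("Top by engagement:", max(comments, key=Comment.engagement).text)
print("Top by bridging:  ", max(comments, key=Comment.bridging_score).text)
```

Real bridging systems typically infer the opinion groups from voting behavior rather than taking them as given, but the ranking principle is the same: reward statements that earn approval across the divide.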
What's a technology that you think is overhyped?

Well, I'll name artificial general intelligence. That's the standard answer, right? But beyond that, I think the idea that we can use watermarking, or any sort of content provenance or content-layer detection, to fight deepfakes and the broader information manipulation aimed at polarization and election meddling — I think that is quite hyped. It is probably not delivering, at least not for this election.

Very simple things like "prebunking" — telling people that everything is probably deepfaked — are actually easier, and probably more effective, than any of those technological layers. It is people waking up to the fact that everything is an illustration, essentially. Everything is painted.

The idea of prebunking is to show people how a deepfake is made, how a polarization attack is done, and to build into people's contextual awareness that these kinds of attacks are going on. So when they see something that looks like a polarization attack, a deepfake or election interference, instead of just jumping to "outrage, share, retweet," they would say, "Oh, I've seen this before." I think prebunking, in general, works much better than any of those technological add-ons that you have to ask everybody to install, to check, or to train themselves on.

What book most shaped your conception of the future?

The book is called "Plurality," and it's a collaborative work. More than 60 people worked on it, and it's continuously being updated. It is a living document, free of copyright. It basically does two things. First, it chronicles how, over the past decade, we successfully depolarized Taiwanese society using the sort of pro-social media techniques we just talked about. Then it extrapolates: with a newer generation of emerging technologies, and in jurisdictions that need such technologies much more than today's Taiwan does, how might this conceivably work in the future? I did a few edits and I maintain part of the website, but most of the writing is by others.

I'm a Taoist, and Taoism is one of the oldest branches of anarchism, if you can even call it a branch. The idea is really simple: instead of using copyright, or any of those pre-internet legal instruments that try to enforce an author's voice, we simply allow anyone to take a few chapters, if they want, and call it their own work. We won't sue them. So this is an experiment not just to describe the ethos of voluntary collaboration, but also to use that ethos to manage the creation of the book — kind of drinking our own champagne.

What could government be doing regarding technology that it isn't?

Government can use its own bridging systems, its own pro-social media, for what we call "broad listening," instead of broadcasting. The idea is that anytime you see a potential new harm, you very quickly and broadly listen to the whole society by asking a random selection of people — in Taiwan, we sent SMS [text messages] to 200,000 random numbers. You convene a mini-public, a random sample of the people, get them to deliberate, and then take the common ideas and make them into law.

Taiwan actually did that. We held the deliberation in March, consulted the stakeholders, and by July had already signed the kind of law that ratifies those common understandings — for example, requiring public key signatures on advertisements that could be impersonating celebrities, and so on.

In this kind of broad listening, if you gather lots and lots of ideas, you can use language models to compress them, with nuance. And then you get, again, bridges that can close much longer divides by surfacing the unlikely common ground. Recently, in the [Tokyo] gubernatorial election, there was a candidate who got something like 15,000 inputs to his platform. His name was Takahiro Anno, and he used exactly these kinds of broad listening ideas.
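As a rough illustration of that broad-listening pipeline, the sketch below groups free-text responses into clusters and leaves a stub where a language model would compress each cluster into a nuanced summary. The sample responses, the TF-IDF and k-means clustering choices, and the `summarize_cluster` placeholder are all assumptions made for illustration, not the tooling actually used in Taiwan or in Anno's campaign.

```python
# Sketch of "broad listening": take many free-text responses (e.g. replies
# from a random SMS sample), cluster them, then summarize each cluster.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

responses = [
    "Ads impersonating celebrities should be clearly labeled.",
    "Platforms must verify who paid for a political ad.",
    "I worry that labeling rules will burden small creators.",
    "Scam ads using famous faces tricked my parents.",
    "Verification is fine, but keep the process lightweight.",
    # ...in practice, thousands of replies from a random sample of citizens
]

# Turn the responses into vectors and group similar ones together.
vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

def summarize_cluster(texts: list[str]) -> str:
    # Placeholder: a real pipeline would prompt a language model here to
    # compress the cluster with nuance and flag points of common ground.
    return f"{len(texts)} responses, e.g. {texts[0]!r}"

for cluster in sorted(set(labels)):
    members = [text for text, label in zip(responses, labels) if label == cluster]
    print(f"Cluster {cluster}: {summarize_cluster(members)}")
```

The compression step is where the "with nuance" part lives: rather than counting keywords, a model can preserve the caveats inside each cluster and point out where otherwise opposed groups happen to agree.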
What surprised you most this year?

I was surprised how effectively — just through prebunking and collaborative fact-checking alone — [Taiwan's] January election was kept largely free from the side effects of deepfakes and polarization attacks. You could even say that some of the attacks backfired, because people were prebunked and were ready for them. So when those attacks did happen, they actually increased our solidarity instead of sowing discord.

Everybody was bracing for the generative AI impact, right? Bracing for the bot that works across languages, that can generate novel content, that escapes detection because it has a very convincing profile. And although all of that did happen, it did not affect the election negatively, because of the preparation we did. It was quite surprising to see how well it worked. And I hope that by publishing the kind of work we did, we can help other democracies gain this kind of societal resistance.