Hello, and welcome to today’s edition of the Future in Five Questions. Today I talked to Tamara Kneese, director of the nonprofit Data & Society Research Institute’s Algorithmic Impact Methods Lab and author of Death Glitch: How Techno-Solutionism Fails Us in This Life and Beyond. She also worked on Intel’s “Green Software” team, studying the culture of the developers working on decarbonization software. We discussed the return of Luddite thought, modern dystopian sci-fi and why she wishes the term “AI” itself would just go away. An edited and condensed version of the conversation follows:

What’s one underrated big idea?

The Luddites are kind of having a resurgence right now because of work by journalists like Brian Merchant and my friend Wendy Liu, who made her own “The Luddites Were Right” shirts. Workers are looking at the AI landscape right now and they're very conscious of the fact that management of course is going to use AI as an excuse to lay people off or pay them less, and the question is what workers can do about it.

It isn't so much that there's a total refusal to work with technology; there might be ways that technology can in fact be useful to workers, can make their jobs easier and can even enhance their productivity in some ways. But if workers are not reaping the benefits of that increased productivity, and they don't have any say over how the technologies are introduced into their workplace, then that is the problem.

The Luddites organized collective action around destroying machines that threatened their livelihoods. I think that's very similar to the kinds of debates we're having right now, where the question is how we ensure that workers have more of a say over how these tools are introduced into their workplaces, that people's work is not taken up by these systems and that the systems are not used as an excuse not to pay people for their work.

What’s a technology you think is overhyped?
Climate tech is definitely overhyped. For me it's especially annoying because while I was working at Intel it was the era of the metaverse, and also the crypto boom. There were a lot of conversations about how blockchain, and Ethereum in particular, could be useful for the environment and sustainability efforts, and there was a whole burgeoning social movement around this. Meanwhile, I never saw much evidence that this technology would actually be useful.

I think the climate and AI issue has many of the same problems: people can’t point to real, measurable results showing that AI introduced into various contexts is helping anyone with their sustainability efforts, whether it's deforestation prevention, or monitoring droughts, or other things of that nature. The question is whether the technology is really doing the thing that the developers claim it does.

The worst part of the climate tech fervor has to do with speculation in the startup space, where people are investing in the fantasy of things like carbon capture or carbon offsets. The blockchain era was very much built on this idea that we're going to tokenize the rainforest in order to save it. Meanwhile, a lot of those companies were, in fact, just engaged in different forms of colonial extraction, taking advantage of Indigenous communities and not consulting them at all in these schemes.

What book most shaped your conception of the future?

One is Simone Browne’s “Dark Matters.” It’s about the histories of surveillance and blackness, and thinking about the relationship between chattel slavery, the infrastructures required for the internet and the origins of predictive policing, which has a very obvious historical connection to the surveillance of enslaved people. There’s a relationship between the past and present when it comes to systems of oppression, but we can also look to the past to find forms of resistance.
There is still a tendency within conversations about AI and its future to completely forget about these deeper histories. I don't think we can really have conversations about the future of technology without looking to histories of colonialism, racial capitalism and enslavement.

Another is the novel “Wrong Way” by Joanne McNeil. It's incredibly bleak, but it is one of the more realistic stories of a woman working in the tech industry that I've read in quite a while. We often get the perspectives of white-collar workers; you have a lot of whistleblower memoirs from women in tech. But this is a sci-fi story from the perspective of a woman in precarious circumstances, going from job to job, working as a gig worker and then becoming a driver, which is supposed to be a very prestigious position in the world of the book. The woman has the “great honor” of working for a self-driving car company that, surprise, is not really self-driving. The cars are actually driven by humans who are contorted and squished up inside the vehicles, in a very obvious Mechanical Turk metaphor. It isn't a story that ends with a “hooray” collective moment of solidarity and worker triumph. But I do think it offers a look at what kind of horrific AI present or future could take shape if we don't have collective action.

What could government be doing regarding technology that it isn’t?

I'm part of a lot of different committees right now looking into the problem of stakeholder engagement. There are a lot of calls for, and recent policy in, both the U.S. and Europe about including more stakeholder engagement in the design, development and assessment process when it comes to AI systems. Part of the problem is that it's still really hard to find rooms where actual workers from different areas, or even organizations or union representatives, are in the room.
It's often researchers, including responsible AI researchers from big tech companies, people like me from civil society, and then academic researchers, who are great but also fairly disconnected from what it means to actually implement these things. I want to get artists in the room; I want to get journalists in the room. What could be really useful is ensuring that we consider who the experts are and who needs to be brought into conversations about tech policy.

What surprised you most in the past year?

I have been pleasantly surprised by the attention to the environmental impact of AI. I know many people in my little academic world who have been researching this for many years at this point, and who have been beating the drum pretty hard about the impacts of data centers. This is not new information, and in a lot of ways the entire tech industry is built on incredibly resource-intensive and extractive processes. But we do seem to be reaching a key moment; maybe it's because it's harder for people to ignore the impending and ongoing terrible climate catastrophes that happen sporadically. Looking at our future is a little bit terrifying at times, so maybe there is more of a public push for regulation around things like AI.

That's why we endorse the AI Environmental Impacts Act, because even if it's not perfect, at least we're talking about the need for a much more comprehensive way of measuring impacts, not just in terms of carbon and water costs, but also downstream impacts on communities: how people are affected by data center noise pollution, or by having data centers take all of their drinking water. All these issues are related to AI, which as a term is often used to obfuscate the reality of what the technology is. If we could just get rid of the word “AI” at this point, that would be great.