Happy Friday! Glad to be back as your host this week. Today, we’re talking about the hottest tech story right now, AI, and some of its consequences that disproportionately affect women. Send me your feedback: sgardner@politico.com.

Sen. Richard Blumenthal (D-Conn.) made headlines on Tuesday when he used ChatGPT to write his opening statement for a Senate hearing on generative AI tools, to prove just how advanced the technology has become.

The hearing was thorough, with eager witnesses, including OpenAI CEO Sam Altman, offering three hours of testimony on an array of hot topics: compensation for artists whose work AI replicates, chatbot language inclusivity and how to protect minors using the technology.

But the hearing was less thorough on one issue: the dangers that AI poses specifically to women.

Blumenthal plans to rectify that omission in upcoming hearings (Tuesday’s was the first of several, according to the senator): “We’re going to have witnesses who focus on harassment of women,” and bias problems associated with the technology, he told Women Rule.

The problems Blumenthal mentioned have been gaining traction in the media in recent months. And it’s a complex topic, because there are many ways AI could prove harmful to women, and the risks are expanding as the technology advances.

For one, the arrival of more advanced and widely available AI has worsened the threat of women’s faces being falsely superimposed onto photos or videos of other people, known as deepfakes, and depicted in compromising or violent situations. This form of harassment is particularly potent for women in the public eye, whose photos are widely available online.

“We’re already seeing this coordinated, chronic online abuse that’s experienced by women in politics and women in journalism,” said Gina Neff, executive director of the Minderoo Centre for Technology & Democracy at the University of Cambridge. “What the new AI tools allow for is an amping up at unprecedented speed, scope and scale of that chronic abuse that women are already facing.”

That “unprecedented speed” is what’s worsening deepfake abuse, according to Kristen Lorene Zaleski, director of forensic mental health at the Keck Human Rights Clinic. While deepfakes are not new, they were, until recently, time-consuming and technologically complex to create. “Now, you can do it for a couple of dollars, or even on some apps and programs for free,” Zaleski told Women Rule.

Zaleski is also a licensed clinical social worker who has worked with several clients targeted by deepfake porn. She’s noticed a societal lack of knowledge about the issue, which can have devastating effects for victims. One of her clients, the target of a deepfake video that spread online, lost her job as a result. “It’s not talked about, so people that don’t specialize in this don’t necessarily see it as an issue or understand it,” Zaleski said.

Past reports have shown that the vast majority of deepfakes online are pornographic videos targeting women. Other countries, like Australia and the United Kingdom, have passed or are working to pass legislation that would criminalize this type of abuse.

Last year, President Joe Biden signed into law the Violence Against Women Reauthorization Act, which made it possible for victims to sue for civil penalties over the nonconsensual disclosure of their intimate images, but it did not extend those same protections to those affected by deepfakes, according to Rep. Joe Morelle (D-N.Y.).
Morelle introduced legislation intended to mitigate the spread of harmful deepfakes earlier this month, but it has yet to see any action.

“The vastly increasing number of deepfakes are one of the reasons why we need oversight and regulation,” Blumenthal told Women Rule. “There are ways to stop people putting public figures’ faces on pornographic stars. There are ways to restrict images that are clearly intended for harassment.”

While deepfakes pose an increasingly prevalent problem, they are not the only AI-related risk facing women. Because AI relies on the endless wasteland of internet content to learn from and replicate, it runs the risk of reproducing and perpetuating societal biases.

One real-world example? The over-sexualization of run-of-the-mill photos of women.

While researching an article for the Guardian, Gianluca Mauro, co-author of “Zero to AI: A Non-technical, Hype-free Guide to Prospering in the AI Era,” and New York University journalism professor Hilke Schellmann saw this bias problem firsthand.

They found that AI algorithms used by social media platforms, including Instagram and LinkedIn, decide what content to promote and what content to suppress partly based on how sexually suggestive the algorithm deems a photo to be. (The more sexually suggestive, the less likely it is to be seen.) But when the software analyzed comparable pictures of men and women in underwear, it rated the photos of women as far more sexually suggestive. It also flagged photos of pregnant women as highly sexually suggestive. As a result, posts featuring pictures of women may be more likely to be shadowbanned. (For the technically curious, a sketch of how such a test works appears at the end of this piece.)

“Philosophically, it means that women’s bodies are considered sexual and therefore should not be seen,” Mauro told Women Rule.

AI tools have also reflected societal bias in hiring: Amazon scrapped a recruiting tool meant to streamline its hiring process after it became clear that the system preferred male candidates. And disparities in available health data could make medical apps that use AI less accurate for women.

“It’s kind of a cop out for a lot of companies to just say, ‘well, we have societal bias against women in society, so of course that will be reflected in the algorithm,’” Schellmann said. “It’s like, ‘No, no, no. We’re trying to build technology to make the world better, not worse, and not to replicate the problems that we already have.’”
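For the technically curious: here is a minimal sketch of the kind of test described above, sending two comparable photos to an off-the-shelf vision classifier and comparing its “raciness” ratings. It assumes Google Cloud Vision’s SafeSearch feature as the classifier (one widely available example of this category of tool, not necessarily the exact system any platform uses), and the file names are placeholders.

```python
# Minimal sketch: compare a cloud classifier's "racy" ratings for two
# comparable photos. Assumes the google-cloud-vision package is installed
# and credentials are configured; file names below are placeholders.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def racy_likelihood(path: str) -> str:
    """Return the classifier's 'racy' likelihood label for one image."""
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    annotation = client.safe_search_detection(image=image).safe_search_annotation
    # The rating is a likelihood enum, from VERY_UNLIKELY to VERY_LIKELY.
    return vision.Likelihood(annotation.racy).name

# Comparable photos of a man and a woman in similar dress can come back
# with very different labels -- the disparity the researchers describe.
for photo in ["man_underwear.jpg", "woman_underwear.jpg"]:
    print(photo, "->", racy_likelihood(photo))
```

Running a check like this on matched pairs of images is roughly how a non-engineer can observe the gap for themselves: the code does nothing clever, the disparity lives entirely in the model’s ratings.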