The disruption promised by artificial intelligence has already reached America’s schools. An Indianapolis teacher with two decades of experience told POLITICO’s Dana Nickel that she had to change jobs after an AI-generated deepfake nude picture of her was circulated around her school. “The way it impacted my career is indescribable,” said Angela Tipton. “I don’t know if I’ll ever be able to stand in front of a classroom in Indianapolis again.”

Dana describes how what was once a rare occurrence has become an epidemic in middle and high schools with the rise of generative AI tools. Congress has dithered over how to stop deepfakes, and even though 20 states have passed laws that penalize the spread of nonconsensual deepfakes, their results vary widely.

The burgeoning patchwork of laws governing AI-generated content is a growing concern for governments across the world: there’s a jarring contrast between the dozens of companies and nations that assembled in Seoul last week to set a “global” AI agenda and the ground battle to pass binding AI policy in states like California and Colorado.

Teachers and school leaders say the effects of AI are roiling the classroom, and they are demanding tailored solutions even as bigger debates roll on. It’s not clear, for example, who should be punished for making and circulating deepfakes. When perpetrators are minors, there are even more questions. And there are no clear mandates on reporting deepfake incidents to law enforcement. In short: educators want help, and they want it now.

“We’re pushing lawmakers to update [laws] because most protections were written way before AI-generated media,” Ronn Nozoe, CEO of the National Association of Secondary School Principals, told Dana. “We’re also calling on the Department of Education to develop guidance to help schools navigate these situations.” The Education Department pointed Dana to AI guidance issued last year, but it doesn’t have specific information about AI-generated deepfakes.
Tipton’s experience prompted Indiana’s Republican Gov. Eric Holcomb to sign a bill in March that expanded the state’s existing “revenge porn” laws to include AI-generated images. The Indianapolis teacher said she also had some success pursuing students who shared deepfakes of her through Title IX law, which bans sex discrimination, including sexual harassment, in schools that receive federal funding. A new Title IX rule finalized this year specifies that online sexual harassment includes “nonconsensual distribution of intimate images that have been altered or generated by AI technologies.”

The White House’s Task Force to Address Online Harassment and Abuse also published a report this month that says the Education Department will issue “resources, model policies and best practices” around online harassment.

One uncomfortable question cuts to the heart of the ongoing debate around how AI-powered harassment should be regulated: How liable are the actual students who generated the content in the first place? The state of Florida charged two middle school students with felonies for making deepfake nudes in December 2023. A spokesperson for a Washington state district said a school took a case of AI-powered harassment to Child Protective Services, but the legal team decided administrators did not have to report fake images to police.

Beyond the students making and circulating the images, experts and powerful industry interests are divided on how useful (or legal) it is to hold the platforms where the images are distributed liable.
“Targeting the creation and solicitation of this imagery would have much more of an impact than targeting distribution alone,” Mary Anne Franks, a George Washington University law professor and cybercrime expert, told Dana, echoing industry voices who told POLITICO over the weekend that federal legislation targeting distributors of AI-generated nonconsensual porn “is likely overbroad and unconstitutional.”

Meanwhile, high school students say their districts could be doing more to crack down on the deepfake phenomenon. Washington high school freshman Caroline Mullet said a fellow student used AI to create nude pictures of her friends. Her father, Democratic state Sen. Mark Mullet, told POLITICO that inspired him to introduce a bill expanding criminal penalties under a child sex abuse law to include digitally generated explicit content.

“The boy who did this had the idea that it was OK. … He didn’t take it too seriously,” Caroline Mullet told Dana. “I feel like at the end of the day, that’s his decision to do this … but I do think that the school can be helpful and do a better job of spreading the word.”