In schools, deepfake nudes have no easy answers

How the next wave of technology is upending the global economy and its power structures
May 29, 2024

By Derek Robertson

Photo illustration of a teenage girl at a school desk, staring into her phone, with a facial recognition pattern in the background. | Erin Aulov/POLITICO (source images via iStock)

The disruption promised by artificial intelligence has already reached America’s schools.

An Indianapolis teacher with two decades of experience told POLITICO’s Dana Nickel that she had to change jobs after an AI-generated deepfake nude picture of her was circulated around her school.

“The way it impacted my career is indescribable,” said Angela Tipton. “I don’t know if I’ll ever be able to stand in front of a classroom in Indianapolis again.”

Dana describes how what was once a rare occurrence has become an epidemic in middle and high schools with the rise of generative AI tools. Congress has dithered over how to stop these images, and even though 20 states have passed laws penalizing the spread of nonconsensual deepfakes, the results vary widely.

The burgeoning patchwork of laws governing AI-generated content is a growing concern for governments across the world — there’s a jarring contrast between the dozens of companies and nations that assembled in Seoul last week to set a “global” AI agenda and the ground battle to pass binding AI policy in states like California and Colorado.

Teachers and school leaders say the effects of AI are roiling the classroom, and they are demanding tailored solutions even as bigger debates roll on. For example, it’s not clear who should be punished for making and circulating deepfakes. When the perpetrators are minors, there are even more questions. And there are no clear mandates for reporting deepfake incidents to law enforcement. In short: educators want help, and they want it now.

“We’re pushing lawmakers to update [laws] because most protections were written way before AI-generated media,” Ronn Nozoe, CEO of the National Association of Secondary School Principals, told Dana. “We’re also calling on the Department of Education to develop guidance to help schools navigate these situations.”

The Education Department pointed Dana to AI guidance issued last year, but that guidance doesn’t include specific information about AI-generated deepfakes.

Tipton’s experience prompted Indiana’s Republican Gov. Eric Holcomb to sign a bill in March expanding the state’s existing “revenge porn” laws to cover AI-generated images. The Indianapolis teacher said she also had some success pursuing students who shared deepfakes of her under Title IX, the federal law that bans sex discrimination, including sexual harassment, in schools that receive federal funding.

A new Title IX rule finalized this year specifies that online sexual harassment includes “nonconsensual distribution of intimate images that have been altered or generated by AI technologies.” The White House’s Task Force to Address Online Harassment and Abuse also published a report this month that says the Education Department will issue “resources, model policies and best practices” around online harassment.

One uncomfortable question cuts to the heart of the ongoing debate around how AI-powered harassment should be regulated: How liable are the actual students who generated the content in the first place? The state of Florida charged two middle school students with felonies for making deepfake nudes in December 2023. A spokesperson for a Washington state district said a school took a case of AI-powered harassment to Child Protective Services but the legal team decided administrators did not have to report fake images to police.

Beyond the students making and circulating the images, experts and powerful industry interests are divided over how useful (or legal) it would be to hold liable the platforms where the images are distributed.

“Targeting the creation and solicitation of this imagery would have much more of an impact than targeting distribution alone,” Mary Anne Franks, a George Washington University law professor and cybercrime expert, told Dana, echoing industry voices who told POLITICO over the weekend that federal legislation targeting distributors of AI-generated nonconsensual porn “is likely overbroad and unconstitutional.”

Meanwhile, high school students say their districts could be doing more to crack down on the deepfake phenomenon. Washington high school freshman Caroline Mullet said a fellow student used AI to create nude pictures of her friends. Her father, Democratic state Sen. Mark Mullet, told POLITICO that the incident inspired him to introduce a bill expanding criminal penalties under a child sex abuse law to include digitally generated explicit content.

“The boy who did this had the idea that it was OK. … He didn’t take it too seriously,” Caroline Mullet told Dana. “I feel like at the end of the day, that’s his decision to do this … but I do think that the school can be helpful and do a better job of spreading the word.”

message to newsom

California Gov. Gavin Newsom. | Jeff Chiu/AP

California’s crusading Democratic Gov. Gavin Newsom is famously tight-lipped when it comes to pending legislation in his state.

But, in the spirit of DFD’s running feature, POLITICO’s California Playbook posed five questions to him in absentia about AI. Their line of questioning, timed to a summit where Newsom was slated to appear today with AI leaders like Jennifer Chayes and Fei-Fei Li, highlights some of the lingering big-picture questions around California’s ambitious plans for the technology, including:

How can AI regulation coexist with budget constraints? “This year’s multibillion-dollar budget deficit means even bills that the governor might agree with could be put on hold until the coffers are a bit more full,” Lara Korte and Dustin Gardiner write. “Many proposals — notably, state Sen. Scott Wiener’s massive AI framework — would require money to enforce or create new state agencies that need staffing and resources… We’d like to know what the governor sees as the most urgent need for the state to tackle this year.”

What are the actual risks? “There is growing concern among some tech experts that powerful artificial intelligence models could pose huge risks in the hands of bad actors — leading to global catastrophe, or even human extinction,” Lara and Dustin write. “It’s part of the reason Wiener wants to require large-scale models to undergo safety testing before they deploy. On the other end of the spectrum, those like venture capitalist Marc Andreessen believe AI could bring life-altering benefits to humanity. Does Newsom give any credence to either side? Is he in the middle?”

How can you possibly keep up with the industry? From our California colleagues: “Tech equity advocates and some lawmakers don’t want the private sector to have complete control over the technology. They think that the state should invest in publicly-funded research hubs and university programs to help shape AI development in a responsible way. Does the governor agree?”

from the archives (sort of)

AI could end up revolutionizing the future by rewriting the past — and archivists have something to say about that.

On today’s POLITICO Tech podcast, Steven Overly interviews archival producers Rachel Antell and Stephanie Jenkins, cofounders of the Archival Producers Alliance, about the impact generative AI might have on documentary films. Until now, the industry has been largely dependent on archival research.

“I’m not sure [documentary] viewers necessarily know that there’s a whole craft of finding material, verifying material, getting it to look as good as possible, and it’s one that’s quite labor intensive,” said Antell. “We fear that in the place of real archival research and real primary resources, that people will try to replace that with generative material.”

Listen to Steven’s interview with Antell and Jenkins on today’s POLITICO Tech.

Tweet of the Day

That Elliott Management letter to Texas Instruments is the perfect encapsulation of the whole domestic manufacturing challenge. Wrote about it for today's @Markets newsletter

The Future in 5 links

Stay in touch with the whole team: Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com); Daniella Cheslow (dcheslow@politico.com); and Christine Mui (cmui@politico.com).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser


