Social media warning labels come to Washington

How the next wave of technology is upending the global economy and its power structures
Sep 26, 2024
 

By Christine Mui

With help from Daniella Cheslow and Derek Robertson


A social media feed on a smartphone. | AP

An idea percolating all summer in the big national argument about social media — warning labels to help reduce the harms of online platforms to kids — has suddenly landed in Congress.

On Tuesday, Sens. Katie Britt (R-Ala.) and John Fetterman (D-Pa.) introduced a bill requiring platforms to add those labels. They envisioned a pop-up box appearing every time users log on, asking them to acknowledge the potential mental health risks before they're allowed to scroll, post or chat. The labels, which would be developed by the U.S. surgeon general and the FTC, would also link to mental health resources.

Warning labels are an old idea from the physical world — think cigarette packs, electrical cables and hard seltzers — that health advocates have been trying to revive for the virtual one.

With no new national policies on the books to police children's safety online, and even the existing state laws stuck in court, the simple, time-tested idea of a label looks increasingly appealing.

Just one problem: it's not clear whether they work.

The new Senate bill is specifically a response to an op-ed that Surgeon General Vivek Murthy published in June, declaring social media's addictive algorithms a health risk to youth and making the case for tobacco-style warning labels. (Tech industry groups said his idea infringes on free speech, and prominent tech CEOs have rejected the claim that social media worsens young people's mental health.)

Murthy’s push galvanized kids' safety advocates, but his job doesn’t come with the power to issue warnings unilaterally, so he could only plead with Congress to do something about it.

It didn’t take long for warning labels to take off as a new target for policymakers. There’s a clear appeal: A warning label is a way to cut through years of complicated debates over restrictions, bans and the myriad tradeoffs that come into play when you try to regulate giant forums for public speech.

Earlier this month, a bipartisan coalition of 42 state attorneys general rallied behind Murthy’s cause and sent Capitol Hill a reminder to act. “It really is the defining issue right now for our children, and it's important that we do something,” Britt said about her bill in a Fox News interview. “Doing nothing’s not an option.”

One important question that has taken a back seat in the debate is whether warning labels will actually… you know, work.

Cigarette packs have been the prime comparison point. Murthy highlighted tobacco studies that showed warning labels can boost awareness and change behavior.

But would kids, or their parents, feel any differently about TikTok or Instagram if they had to click through a box that warned them it might hurt their mental health?

Aileen Nielsen, a visiting professor at Harvard Law School who has researched warning labels, criticized Murthy’s recommendation for lacking teeth. She compared it to cookie consent boxes and so-called state “zombie laws” that mandate the disclosure of synthetic digital content but aren’t widely enforced.

Another issue, she said, is that even if platforms comply, there’s no guarantee kids will use social media less — or that it will improve their mental health.

Rachel Rodgers, an applied psychology professor at Northeastern University, studied labels on altered photos and found they failed to reduce the negative body-image and eating-disorder effects of exposure to idealized images. She noted there’s symbolic value in labeling something as risky, since it explicitly connects the product with the idea of harm, but said the actual mechanism for reducing risk is unclear.

The Senate bill’s requirement of “adding pop ups to resources around mental health will have few negative effects and may be helpful,” she said. But she added a caveat: “introducing policy is an onerous task, and then introducing policy where you're not certain it will actually be helpful is perhaps not a firm path to be pursuing.”

Certainly, there are logistical reasons the proposal is appealing, though they also serve as something of a caution. Nielsen calls warning labels “very good political theater.”

“It's something that will probably pass First Amendment muster. It's something that because it's ineffective, is unlikely to irritate large stakeholders like large tech firms, and yet it's something that people also find morally powerful,” she said.

The idea of disclosing risk has also surfaced in the debate over TikTok — ironically, pitched by the company itself as a way to avoid shutting down the platform. While defending TikTok before federal judges, a lawyer for creators piled on and suggested an alternative to the law forcing it to sell or face a ban: perhaps a Surgeon General’s warning label noting that its feed might be influenced by Chinese government officials would suffice? (The three-judge panel appeared highly skeptical.)

Child safety advocates say they support the Senate bill, but warn it shouldn’t substitute for legislation like the Kids Online Safety Act, which they say puts more onus on platforms to make changes and uphold a duty of care to their users.

Observers don’t expect the bill to move on its own in the current Congress, but optimists speculated that it could be hitched to a bigger piece of must-pass legislation.

Alix Fraser, director of the Council for Responsible Social Media at the nonprofit Issue One, also suggested warning labels might help address other tech policy issues. Generative AI tools, for instance, could be required to disclose their information may not be fully accurate. (ChatGPT already comes with that warning.)

“I absolutely think that seeing warning labels for other things would be positive in the technology space,” he said. “That would be a great tool in the tool belt. But none of these things should supplant the bigger fixes.”

AI <3 NUCLEAR

AI companies pledged allegiance to the atom Thursday at a D.C. panel where reps from OpenAI, Nvidia, Anthropic and AMD said they will need clean energy to keep the U.S. ahead of China — and nuclear seemed to be the best answer.

“Interest in nuclear is at an all-time high,” said Thomas Zacharia, senior vice president of public policy at AMD, at a panel convened by the Special Competitive Studies Project.

“We see this as an opportunity for the United States to drive forward clean energy solutions and unlock clean energy that can help power the future of these models,” said Rachel Appleton, a lobbyist at Anthropic.

At a moment when Microsoft has inked a deal to buy power from the shuttered Three Mile Island plant, the positive vibes around nuclear energy run through the whole AI stack — from the highest-profile companies building the models and chips to the fairly obscure firms running data centers. Just one small problem: Nuclear energy still has a public-relations challenge, one that executives seem to consider a nuisance rather than a substantive issue.

Later in the same event, Charles Meyers, executive chair of the data center company Equinix (“many people still think I work for a gym”), touted his firm’s April deal with Sam Altman’s Oklo to procure up to 500 megawatts of nuclear energy.

“Unfortunately, as we all know, nuclear has a brand problem, one that is not really necessarily rooted in a well informed view of the facts and the current state of technology,” Meyers said. — Daniella Cheslow

U.K. HITS THE SNOOZE ON AI

After florid warnings of electoral chaos sown by AI, the United Kingdom’s recent elections were in the end uneventful on that front.

POLITICO’s John Johnston reported for Pro subscribers that election watchers said that the Labour Party had such a massive polling lead for so long that the vote may have been “just too boring to deepfake.”

“The consensus seems to have been that this was a boring election,” said Ales Cap, a researcher on deepfakes and elections at University College London. “Everybody kind of knew what the result was going to be. Malicious foreign actors would have very little interest in trying to amplify disruptive narratives.”

Still, a few crude deepfaked clips did break through to the general consciousness: Marcus Beard, a former No.10 Downing Street communications adviser, told John he tracked an uptick in searches for Labour health spokesperson Wes Streeting and Palestine after a deepfake of Streeting speaking disparagingly about Palestinians made the rounds. — Derek Robertson

TWEET OF THE DAY

never studying os again


Stay in touch with the whole team: Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com); Daniella Cheslow (dcheslow@politico.com); and Christine Mui (cmui@politico.com).

