An idea percolating all summer in the big national argument about social media — warning labels to help reduce the harms online platforms pose to kids — has suddenly landed in Congress.

On Tuesday, Sens. Katie Britt (R-Ala.) and John Fetterman (D-Pa.) introduced a bill requiring platforms to add those labels. They envision a pop-up box appearing every time users log on, asking them to acknowledge the potential mental health risks before they're allowed to scroll, post or chat. The labels, which would be developed by the U.S. surgeon general and the Federal Trade Commission, would also link to mental health resources.

Warning labels are an old idea from the physical world — think cigarette packs, electrical cables and hard seltzers — that health advocates have been trying to revive for the virtual one. With no new national policies on the books to police children's safety online, and even the existing state laws stuck in court, the simple, time-tested idea of a label is looking more and more timely.

Just one problem: it's not clear whether they work.

The new Senate bill is a direct response to an op-ed Surgeon General Vivek Murthy published in June, declaring social media's addictive algorithms a health risk to youth and making the case for tobacco-style warning labels. (Tech industry groups said his idea infringes on free speech, and prominent tech CEOs have rejected the claim that social media worsens young people's mental health.) Murthy's push galvanized kids' safety advocates, but his job doesn't come with the power to issue warnings unilaterally, so he could only plead with Congress to do something about it.

It didn't take long for warning labels to take off as a new target for policymakers. There's a clear appeal: a warning label is a way to cut through years of complicated debates over restrictions, bans and the myriad tradeoffs that come into play when you try to regulate giant forums for public speech. Earlier this month, a bipartisan coalition of 42 state attorneys general rallied behind Murthy's cause and sent Capitol Hill a reminder to act.

"It really is the defining issue right now for our children, and it's important that we do something," Britt said about her bill in a Fox News interview. "Doing nothing's not an option."

One important question that has taken a back seat in the debate is whether warning labels will actually… you know, work.

Cigarette packs have been the prime comparison point. Murthy highlighted tobacco studies showing that warning labels can boost awareness and change behavior. But would kids, or their parents, feel any differently about TikTok or Instagram if they had to click through a box warning them it might hurt their mental health?

Aileen Nielsen, a visiting professor at Harvard Law School who has researched warning labels, criticized Murthy's recommendation for lacking teeth. She compared it to cookie consent boxes and so-called state "zombie laws" that mandate the disclosure of synthetic digital content but aren't widely enforced. Another issue, she said, is that even if platforms comply, there's no guarantee kids will use social media less — or that doing so would improve their mental health.

Rachel Rodgers, an applied psychology professor at Northeastern University, studied labels on altered photos and found they failed to reduce the negative body image and eating disorder effects of exposure to idealized images.
She noted there's symbolic value in labeling something as risky, since it explicitly connects the product with the idea of harm, but said the actual mechanism for reducing risk is unclear. The Senate bill's requirement of "adding pop ups to resources around mental health will have few negative effects and may be helpful," she said. But she offered a caveat: "introducing policy is an onerous task, and then introducing policy where you're not certain it will actually be helpful is perhaps not a firm path to be pursuing."

Certainly, there are logistical reasons the proposal is appealing, though they also serve as something of a caution. Nielsen calls warning labels "very good political theater."

"It's something that will probably pass First Amendment muster. It's something that, because it's ineffective, is unlikely to irritate large stakeholders like large tech firms, and yet it's something that people also find morally powerful," she said.

The idea of disclosing risk has also surfaced in the debate over TikTok — ironically, pitched by the company itself as a way to avoid shutting down the platform. While defending TikTok before federal judges, a lawyer for creators piled on and suggested an alternative to the law forcing it to sell or face a ban: perhaps a surgeon general's warning label noting that its feed might be influenced by Chinese government officials would suffice? (The three-judge panel appeared highly skeptical.)

Child safety advocates say they support the Senate bill, but warn it shouldn't substitute for legislation like the Kids Online Safety Act, which they say puts more onus on platforms to make changes and uphold a duty of care to their users. Observers don't expect the bill to move on its own in the current Congress, but optimists speculate it could be hitched to a bigger piece of must-pass legislation.

Alix Fraser, director of the Council for Responsible Social Media at the nonprofit Issue One, suggested warning labels might also help address other tech policy issues. Generative AI tools, for instance, could be required to disclose that their output may not be fully accurate. (ChatGPT already carries that warning.)

"I absolutely think that seeing warning labels for other things would be positive in the technology space," he said. "That would be a great tool in the tool belt. But none of these things should supplant the bigger fixes."