Advice for the FDA on regulating AI

The ideas and innovators shaping health care
Sep 03, 2024
Future Pulse

By Ruth Reader, Daniel Payne and Erin Schumaker

AROUND THE AGENCIES

Former FDA officials say there are gaps in the agency's regulation of artificial intelligence. | Jacquelyn Martin/AP

Concern about FDA regulation of artificial intelligence tools used in medicine is growing among former agency officials and other medical specialists.

That’s the bad news. The good: Plenty of people have ideas about how to improve it.

The latest example of their concern is a review in Nature Medicine of 521 FDA-authorized AI medical devices. Forty-two percent lacked published clinical validation data, according to the researchers from the University of North Carolina at Chapel Hill and other institutions.

That lack of rigor, they wrote, gets in the way of health systems that want to adopt the tools: “Patients and providers need a gold-standard indicator of efficacy and safety for medical AI devices.”

They called on the FDA to publish more clinical validation data and prioritize prospective studies.

Why it matters: The researchers are just the latest group to speak out about the lack of regulation of AI tools. 

A way forward: Dr. Scott Gottlieb, FDA commissioner under President Donald Trump, has proposed giving the agency expanded authority to look at how companies make artificial intelligence products. That, he said, is needed to respond to AI’s so-called black-box problem: a lack of clarity about how algorithms reach conclusions.

Gottlieb also said that the private sector could certify tools that don’t pose a risk to patients’ health.

Dr. Mark Sendak, who advises HHS’ national coordinator for health information technology, says any program for regulating AI must be localized, since the tools’ performance can vary with the setting in which they’re deployed.
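
Sendak’s point lends itself to a concrete check: before switching a tool on, a health system can score it against its own labeled cases and compare the result with the vendor’s published benchmark. Below is a minimal sketch in Python using scikit-learn; the function name, tolerance and data are illustrative assumptions, not anything Sendak has prescribed.

from sklearn.metrics import roc_auc_score

def local_validation_check(y_true, y_score, vendor_auroc, tolerance=0.05):
    # Score the vendor's model on this hospital's own labeled cases
    local_auroc = roc_auc_score(y_true, y_score)
    # Flag the tool if local discrimination falls well below the vendor's claim
    return local_auroc, (vendor_auroc - local_auroc) <= tolerance

# Invented labels and model scores from a hypothetical local chart review
labels = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.2, 0.7, 0.6, 0.4, 0.3, 0.8, 0.5]
auroc, ok = local_validation_check(labels, scores, vendor_auroc=0.85)
print(f"local AUROC={auroc:.2f}, within tolerance: {ok}")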

Former FDA official Bakul Patel, who designed much of the FDA’s current approach to governing health AI and now works for Google, writes that the agency needs an upgrade to regulate most AI: more staff and stronger post-market surveillance capabilities.

A new paradigm: Critics contend the FDA isn’t prepared to regulate generative AI, the kind that can respond to questions.

Patel and Harvard University public health professor David Blumenthal want the agency to treat generative AI tools like people.

“Regulate them less like devices and more like clinicians,” they write.

Like doctors, the tools should complete required training and tests to evaluate their capabilities, operate under supervision and take continuing education courses when new scientific evidence emerges.

The tools should be licensed and required to report to regulatory authorities on the quality of care they’re delivering. And data about the tools’ performance should be made public, Patel and Blumenthal contend.
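Neither author has specified a reporting format, but the clinician analogy implies a periodic, public scorecard. Here’s a hypothetical sketch in Python of what one filing might contain; every field name and value is invented for illustration.

from dataclasses import dataclass, asdict
import json

@dataclass
class PerformanceReport:
    tool_id: str           # license identifier, analogous to a clinician's license number
    reporting_period: str  # e.g., "2024-Q3"
    cases_handled: int     # volume of patient-facing outputs in the period
    error_rate: float      # share of outputs judged clinically incorrect on review
    adverse_events: int    # incidents escalated to the regulator

# A made-up filing, serialized the way a public registry might publish it
report = PerformanceReport("genai-tool-001", "2024-Q3", 12450, 0.021, 3)
print(json.dumps(asdict(report), indent=2))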

WELCOME TO FUTURE PULSE

Kjosfossen Falls, Norway | Shawn Zeller/POLITICO

This is where we explore the ideas and innovators shaping health care.

Cigarette smokers have lower Parkinson’s rates. Researchers suspect something in the smoke protects them.

Share any thoughts, news, tips and feedback with Carmen Paun at cpaun@politico.com, Daniel Payne at dpayne@politico.com, Ruth Reader at rreader@politico.com, or Erin Schumaker at eschumaker@politico.com.

Send tips securely through SecureDrop, Signal, Telegram or WhatsApp. 

TECH MAZE

AI has arrived in health care, amid many questions about oversight. | AFP via Getty Images

Artificial intelligence experts at last week’s Responsible AI for Health Symposium in Washington compared notes on the safety, possibilities and governance models for the tech.

They also revealed a diversity of opinions about AI’s future in the health sector.

Three debates stood out at the event, hosted by the Johns Hopkins Center for Digital Health and Artificial Intelligence:

Are we ready? Few of the researchers, doctors and health system leaders in attendance said their employers were appropriately considering AI’s risks.

Those downsides could include algorithms that work better for some people than others, spread misinformation or violate privacy or intellectual property rights.

Most groups have no idea how to manage the pitfalls that can come with the technology, said Dr. Maia Hightower, CEO of Equality AI, an organization working to keep bias out of health AI.

Hightower said health systems should create a comprehensive plan for how they will oversee, audit, validate and monitor AI.
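
One recurring item in such a plan could be a subgroup audit: comparing a tool’s accuracy across patient groups and investigating wide gaps, the kind of bias Equality AI targets. A minimal, pure-Python sketch, with the groups and review data invented for illustration:

from collections import defaultdict

def subgroup_accuracy(records):
    # records: iterable of (group, was_the_output_correct) pairs from chart review
    totals = defaultdict(lambda: [0, 0])
    for group, correct in records:
        totals[group][0] += int(correct)
        totals[group][1] += 1
    return {group: hits / n for group, (hits, n) in totals.items()}

# Invented review results for two patient groups
audit = [("group_a", True), ("group_a", True), ("group_a", False),
         ("group_b", True), ("group_b", False), ("group_b", False)]
rates = subgroup_accuracy(audit)
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap={gap:.2f}")  # a wide gap is a signal to investigate for bias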

How will it affect health disparities? Experts don’t agree, suggesting the question will be a key area of research in the coming years.

Some researchers, such as Hightower, pointed to the many inequities that exist in the health system today — from how care and risk are calculated to how patients are described in clinician notes.

Others suggested disparities will grow because of the technology’s cost: Providers who serve poorer patients won’t be able to take advantage of the latest AI tools.

But some optimists at the event thought AI could create more access by making practices more efficient and expanding the capabilities of the health workforce.

Some algorithms could even offer health insights directly to consumers, bypassing the current system entirely.

What’s the government’s role? There was no consensus about who is ultimately responsible for making sure AI works as it should, but many of the technical and medical experts said they’re closely following what Congress and federal agencies do.

Their expectations and hopes go beyond regulatory frameworks. Some researchers said agencies should fund research and help assemble the data used to train, test and verify AI systems.

WASHINGTON WATCH

HHS' guidance on web tracking went too far, a judge ruled. | Alastair Pike/AFP via Getty Images

HHS relented last week in its fight with the American Hospital Association over the agency’s web-tracking guidance.

HHS’ Office for Civil Rights published guidance in 2023 warning hospitals that sharing IP-address information, collected by tracking software on their public-facing websites, with unapproved third parties violated federal health privacy rules.

The AHA sued, and in June, Judge Mark T. Pittman of the federal district court in Fort Worth, Texas, sided with the hospitals.

Now HHS has dropped its appeal.

Why it matters: Both HHS and the Federal Trade Commission have been looking for ways to regulate the sharing of health data that isn’t covered by HIPAA, the federal health data privacy law.

What’s next? HHS could still seek to restrict what health systems do with tracked data by proposing a new rule.

The agency did not respond to a request for comment.

 

Follow us on Twitter

Carmen Paun @carmenpaun

Daniel Payne @_daniel_payne

Ruth Reader @RuthReader

Erin Schumaker @erinlschumaker