Concern about FDA regulation of artificial intelligence tools used in medicine is growing among former agency officials and other medical specialists. That’s the bad news. The good: Plenty of people have ideas about how to improve it.

The latest example of their concern is a review in Nature Medicine of 521 FDA-authorized AI medical devices. Forty-two percent lacked published clinical validation data, according to the researchers from the University of North Carolina at Chapel Hill and other institutions.

That lack of rigor, they wrote, gets in the way of health systems that want to adopt the tools: “Patients and providers need a gold-standard indicator of efficacy and safety for medical AI devices.” They called on the FDA to publish more clinical validation data and to prioritize prospective studies.

Why it matters: The researchers are just the latest group to speak out about the lack of regulation of AI tools.

A way forward: Dr. Scott Gottlieb, FDA commissioner under President Donald Trump, has proposed giving the agency expanded authority to examine how companies make artificial intelligence products. That, he said, is needed to respond to AI’s so-called black-box problem: a lack of clarity about how algorithms reach conclusions.

Gottlieb also said the private sector could certify tools that don’t pose a risk to patients’ health.

Dr. Mark Sendak, who advises HHS’ national coordinator for health information technology, says any program for regulating AI must be localized, since the tools’ performance can vary based on the setting.

Former FDA official Bakul Patel, who designed much of the FDA’s current approach to governing health AI and now works for Google, writes that the agency needs an upgrade to regulate most AI: more staffing and better post-market surveillance capabilities.

A new paradigm: What the FDA isn’t prepared to regulate, critics contend, is generative AI, the kind that can respond to questions. Patel and Harvard University public health professor David Blumenthal want the agency to treat generative AI tools like people. “Regulate them less like devices and more like clinicians,” they write.

Like doctors, the tools should take required training and tests to evaluate their capabilities, operate under supervision and take continuing education courses when new scientific evidence emerges. The tools should be licensed and required to report to regulatory authorities on the quality of care they’re delivering. And data about the tools’ performance should be made public, Patel and Blumenthal contend.