WHO PAYS? The health system’s increasing interest in artificial intelligence as a clinical tool raises a new question: Who’s liable if the systems make a mistake?

Millions of dollars in payments over medical mistakes are at stake — as well as broader questions about when care is considered good enough and what place AI tools have in American health care, Daniel reports.

The issue is of concern to doctors, patients, trial lawyers, tech executives, lawmakers and regulators. And the questions aren’t just hypotheticals. The president of the American Medical Association, Dr. Jesse Ehrenfeld, said doctors are already seeing AI come up in medical malpractice cases.

Courts have yet to reach a consensus on who will be considered responsible. So doctors are going on the offensive, asking lawmakers and regulators to proactively consider protections against liability when AI systems contribute to a case that ends up in court.

They have allies in Congress, like Rep. Greg Murphy (R-N.C.), a urologist, who wrote to FDA Commissioner Robert Califf earlier this year to suggest a “safe harbor” for doctors and AI software makers if both agree to report problems they find with the technology.

Tech companies, trial lawyers and some hospitals have a different take: Doctors make the final calls in patient care and are ultimately responsible, even if software steers them wrong.

That isn’t comforting to some doctors, who say AI systems could make it harder for them to think independently about cases.

“The confidence with which AI posits its conclusions makes it really hard as a human to say, ‘Wait a minute, I need to question this,’” said Dr. Wendy Dean, president and co-founder of Moral Injury of Healthcare, an organization that advocates for doctors’ well-being.

WELCOME TO MONDAY PULSE. We hope you had as good a weekend as the bears in England who rode on a swan pedal boat. Reach us and send us your tips, news and scoops at bleonard@politico.com or ccirruzzo@politico.com. Follow along @_BenLeonard_ and @ChelseaCirruzzo.