Ethically incorporating artificial intelligence into cancer care is a tall order. That’s according to a small survey published in JAMA Network Open, in which researchers asked oncologists whether they thought patients should have to consent to AI’s use in their cancer treatment. The survey of 204 U.S.-based oncologists was conducted from November 2022 to July 2023.

By the numbers:

— 85 percent of the oncologists said that while they need to understand how AI tools work, their patients didn’t necessarily need to.

— 81 percent said patients should have the right to opt out of AI’s use in their treatment.

— 47 percent thought liability related to AI’s use in medicine should be doctors’ shared responsibility, while 91 percent thought AI developers should be responsible.

— 77 percent said they felt responsible for protecting their patients from AI bias, but only 28 percent felt confident in their ability to do so.

The responses were at times paradoxical, the researchers wrote, pointing to the oncologists’ view that patients don’t need to understand AI but should have the option to refuse its use. There was also a gap between oncologists’ perceived responsibility to protect patients and their ability to do so.

“Together, these data characterize barriers that may impede the ethical adoption of AI into cancer care,” the researchers wrote.

Why it matters: For now, it’s not clear who should be held liable when AI harms patients, for example, by providing an incorrect diagnosis or suggesting an inappropriate treatment. And, as Daniel reported earlier this year, health care quality, patient rights and millions of dollars in malpractice payouts are at stake.