When AI hurts patients, who pays?

The ideas and innovators shaping health care
Mar 29, 2024
Future Pulse

By Daniel Payne, Gregory Svirnovskiy and Ruth Reader

WEEKEND READ

Doctors' use of AI opens new questions about when medical care is good enough. | Adam Berry/Getty Images

AI’s entrance into clinical practice opens new questions about when health care is good enough.

That standard can change depending on who sets the bar — whether it’s health systems, insurance companies or doctors.

But changes in the medical malpractice system, whether in or out of court, suggest AI is causing doctors, lawyers and patients to reconsider whether care is satisfactory, Daniel reports.

The adoption of the technology could shift who pays huge sums for medical mistakes.

But it could also change what doctors, patients and courts consider to be the standard of care.

There’s a risk that an AI system could lead a doctor astray, resulting in a misdiagnosis or an ill-informed treatment plan. But should doctors be expected to know when to ignore the systems, and be held responsible when they don’t?

Another concern is bubbling up among doctors: They’ll be held responsible if they make a mistake without the use of AI — with plaintiffs potentially arguing that the clinician didn’t use the best tools available.

Both worries are reasonable, experts across fields say.

But the changes may be inevitable at this point as health systems race to integrate the technology into their workflows.

Some health leaders say the changes are largely positive because better tools make for more effective clinicians.

But others see the rapid adoption of AI as an obstacle for doctors trying to make sense of a difficult set of symptoms.

WELCOME TO FUTURE PULSE

Savannah, Ga. | Daniel Payne

This is where we explore the ideas and innovators shaping health care.

Animals are often blamed for diseases that end up sickening humans. But a new study found that humans pass more viruses to animals than they catch from them, The Independent reports.

Share any thoughts, news, tips and feedback with Carmen Paun at cpaun@politico.com, Daniel Payne at dpayne@politico.com, Ruth Reader at rreader@politico.com or Erin Schumaker at eschumaker@politico.com.

Send tips securely through SecureDrop, Signal, Telegram or WhatsApp. 

DIAGNOSIS

Cervical cancer screening rates differ between rural and urban centers — and researchers now believe they understand some factors at play. | Suzi Pratt/Getty Images for Hologic

Cervical cancer screenings at community health centers (CHCs) are a key resource for low-income women, but there’s a widening testing gap between urban and rural centers, according to researchers at the American Cancer Society. The gap is rooted largely in a lack of translation services, poverty and unemployment.

What we know: Cervical cancer screening rates were lower in rural CHCs than in urban ones in the years before and during the pandemic. Covid-19’s onslaught only widened those discrepancies, the new study published in the journal Cancer found.

By the numbers: From 2014 to 2019, 43 percent of patients at urban CHCs were up to date on cervical cancer screenings, compared with 38.2 percent at rural CHCs. That disparity grew during the pandemic, eventually reaching 49 percent versus 43.5 percent.

Now, we know why.

Nearly 60 percent of the disparity was due to a high proportion of patients with limited English proficiency, according to the study.

Other factors explained smaller shares of the gap: the proportion of patients with incomes below the poverty level (12.3 percent), differences in unemployment (3.4 percent) and primary care physician density (3.2 percent).

“Increasing access to language translation services or adaptation of patient navigator interventions might improve completion and timeliness of cancer screening in CHCs and among patients with limited English proficiency, especially in rural CHCs,” Hyunjung Lee, the study’s lead author, said in a press release.

Why it matters: Almost 14,000 cases of invasive cervical cancer will be diagnosed in 2024, and around 4,360 women will die of the disease, according to the ACS. While rates of cervical cancer have been on a decline since the 1970s, cases for women in low-income areas have jumped in recent years.

THE LAB

Mayo Clinic researchers are testing hypothesis-driven AI to make sense of complex diseases. | Jim Mone/AP

A new class of artificial intelligence could help us better understand cancer.

Researchers at the Mayo Clinic are experimenting with hypothesis-driven AI to understand complex causes of cancer using many kinds of cell-level information, known as omics data.

In traditional AI, developers dump data into an algorithm designed to learn and let it find hidden patterns in the data. That makes traditional AI good at recognition, like identifying an abnormality in an image. But in hypothesis-driven AI, researchers are training algorithms to answer questions.
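That contrast — letting an algorithm mine data for any pattern versus training it to answer a question researchers have framed — can be sketched in a few lines of Python. This is an illustrative toy with made-up features, labels and a simple nearest-centroid classifier, not the Mayo Clinic’s actual method:

```python
# Toy sketch of the hypothesis-driven framing: researchers choose the
# question (predict tumor type) and the inputs (age, sex, gene flags),
# instead of letting an algorithm hunt unlabeled data for any pattern.

# Hypothetical training records: (age, sex 0/1, gene_a, gene_b) -> tumor type
train = [
    ((64, 0, 1, 0), "lung"),
    ((58, 0, 1, 0), "lung"),
    ((45, 1, 0, 1), "breast"),
    ((51, 1, 0, 1), "breast"),
]

def centroid(rows):
    """Average each feature across a list of feature tuples."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

# Build one average profile (centroid) per tumor type.
groups = {}
for features, label in train:
    groups.setdefault(label, []).append(features)
centroids = {label: centroid(rows) for label, rows in groups.items()}

def classify(features):
    """Answer the framed question: which known type is this case closest to?"""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

print(classify((60, 0, 1, 0)))  # prints "lung"
```

The point of the sketch is the framing, not the classifier: the labels and features encode the researchers’ hypothesis, so the model can only answer the question it was built around.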

“We can use the power of AI to validate scientific or medical hypotheses,” Choong Yong Ung, an assistant professor at Mayo Clinic’s Hu Li Lab, told POLITICO.

In addition to designing around a hypothesis, researchers use their current understanding of how a disease works to curate the training data, which makes the approach more efficient than traditional AI.

Why it matters: Cancer is a complex disease that research has shown is not just related to genes but to a confluence of factors that include the immune system, the tumor’s environment and a person’s lifestyle.

Hypothesis-driven AI, which tests a specific line of inquiry, has the potential to help researchers find better diagnostic methods and develop better drugs for cancer.

For example, in one study, researchers developed a hypothesis-driven AI to classify cancer of unknown primary origin. They hypothesized that a person’s age, sex and genomic sequencing were enough to classify the cancer type. Though the predictions were tested on retrospective data, patients who had been treated for the cancer type the AI predicted had significantly better outcomes.

Even so: Because the AI is trained on a narrower dataset, this form of AI has the potential for bias. It may also fail to account for certain scenarios.
