A manifesto for AI’s self-regulators

The ideas and innovators shaping health care
Feb 27, 2024
Future Pulse

By Daniel Payne, Gregory Svirnovskiy, Ruth Reader, Carmen Paun and Erin Schumaker

FORWARD THINKING

The medical profession will need to set best practices for AI and convey them to regulators, a group of health care luminaries said. | Adam Berry/Getty Images

Whether artificial intelligence improves patient care and strengthens the institutions that provide it will depend largely on the technology’s implementers in the health care field, not on government regulators.

That was the consensus of participants at a symposium supported by Harvard, Microsoft, Apple and other organizations last year, a group of medical luminaries writes in The New England Journal of Medicine's new AI journal.

A group of attendees, including Dr. Zak Kohane, editor of NEJM AI; Rupa Sarkar, editor of The Lancet Digital Health; and Peter Lee, head of Microsoft Research, has issued a manifesto of sorts, aimed at ensuring that the best practices for AI in health care, as they see them, are conveyed to regulators.

Among those best practices:

— AI systems should improve patient health or experience, reduce costs or increase efficiency of care delivery.

— AI should, for now, support clinicians, not replace them.

— Health systems should seek to include representative patient information in the data undergirding AI systems, while also ensuring patient privacy and data security.

Why it matters: Some health tech leaders are concerned that AI, if poorly implemented, will exacerbate bias in medical decision making and add to providers’ paperwork burdens.

But there’s also hope that AI could bring real, positive change to the system should leaders implement it with the right goals in mind.

WELCOME TO FUTURE PULSE

National Arboretum, Washington, D.C. | Shawn Zeller/POLITICO

This is where we explore the ideas and innovators shaping health care.

Here’s another type of student debt relief: Starting in August, the Albert Einstein College of Medicine in the Bronx won't charge tuition after one of its former professors, Ruth Gottesman, donated $1 billion in Berkshire Hathaway stock her late husband had accumulated.

Share any thoughts, news, tips and feedback with Carmen Paun at cpaun@politico.com, Daniel Payne at dpayne@politico.com, Ruth Reader at rreader@politico.com or Erin Schumaker at eschumaker@politico.com.

Send tips securely through SecureDrop, Signal, Telegram or WhatsApp. 

TECH MAZE

AI must capture the full range of human diversity, researchers say. | ANGELA WEISS/AFP via Getty Images

To ensure AI works consistently well regardless of a patient’s race or ethnicity, its algorithms will need more refined and consistent data.

That’s what researchers from institutions including the University of Oxford and University College London found after analyzing British National Health Service data for 62 million people.

How’s that? Ethnicity was recorded for 51 million of the 62 million individuals, but the data was inconsistent in some patients’ records.

The researchers were able to increase the number of patients identified by ethnicity by cross-referencing patient data sets for primary care and hospitalizations.

But more granular patient ethnicity data wasn’t always available. And patients with missing data tended to be male, younger and healthier and from specific parts of the U.K.

Takeaway: “Because AI-based healthcare technology depends on the data that is fed into it, a lack of representative data can lead to biased models that ultimately produce incorrect health assessments,” said Sara Khalid, an associate professor at Oxford, in a release.
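The cross-referencing step the researchers describe lends itself to a short sketch. Everything below is invented for illustration (the patient IDs, the ethnicity labels and the `cross_reference` helper are not from the study); it only shows the idea of filling gaps in one data set from a second one:

```python
# Hypothetical record-linkage sketch: fill missing ethnicity codes in
# primary-care records using matching hospitalization records.
# All patient IDs and ethnicity labels are invented for illustration.

primary_care = {
    "p1": {"ethnicity": "White British"},
    "p2": {"ethnicity": None},  # missing in primary care
    "p3": {"ethnicity": None},  # missing everywhere
}

hospital = {
    "p2": {"ethnicity": "Indian"},  # recorded on hospital admission
    "p4": {"ethnicity": "Black African"},
}

def cross_reference(primary, secondary):
    """Return primary-care records with missing ethnicity filled
    from a second data set, matched on patient ID."""
    linked = {}
    for pid, record in primary.items():
        ethnicity = record["ethnicity"]
        if ethnicity is None and pid in secondary:
            ethnicity = secondary[pid]["ethnicity"]
        linked[pid] = {"ethnicity": ethnicity}
    return linked

linked = cross_reference(primary_care, hospital)
# p2 is now identified via the hospital record; p3 stays unknown,
# mirroring the study's finding that linkage helps but can't close
# every gap.
```

The same pattern is what the remaining gaps make risky: patients absent from both data sets (here, p3) simply stay unlabeled, which is how the skew toward male, younger and healthier patients the researchers flagged can persist.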

CHECKUP

Health systems are embracing AI, but sometimes without all the facts, a consulting firm found. | CHRISTOPHE SIMON/AFP via Getty Images

As health systems embrace AI, most aren’t giving much thought to the nascent regulatory environment. Only 40 percent of health care professionals surveyed by Berkeley Research Group said they’re reviewing AI rules ahead of rolling out the technology.

“They're falling in love with these solutions, and you have departments rolling them out,” James McHugh, managing director at the consulting firm and an expert on health care automation, told Ruth. “But they’re not actually organized internally to manage these initiatives.”

Berkeley Research Group surveyed 150 health care professionals across a spectrum of large and small health systems for a new report on AI and the future of health care.

What it found: Three-quarters of health professionals believe AI will be widespread in the industry within three to four years. They think it will impact diagnostics, preventive screenings and the ability of doctors to predict health outcomes.

But more than half of health professionals are concerned about the accuracy of AI tools and whether they will make hospitals vulnerable to data privacy and cybersecurity issues.

The big picture: McHugh said that to ensure AI is used safely and effectively, health systems must invest in AI expertise. He added that all health systems should have an internal team of experts who vet and manage AI products.

He pointed to the University of California, San Diego’s Center for Health Innovation, which develops and manages new health care technology across the medical center.

Such an approach works well for an academic medical center in a metropolitan hub, but rural and community hospitals might be left behind. “They don't have money to invest in this,” McHugh said.

 

