
Developers of the artificial intelligence models slowly making their way into medicine have long parried ethical concerns by asserting that clinical staff must review the technology's suggestions before they are acted on. That "human in the loop" is meant to be a backstop, keeping potential medical errors conjured up by a flawed algorithm from harming patients.

And yet, industry experts warn that there’s no standard way to keep humans in the loop, giving technology vendors significant latitude to market their AI-powered products as helpful professional tools rather than as autonomous decision-makers. 


Health record giant Epic is piloting a generative AI feature that drafts responses to patients' email queries, but clinical staff must review the suggestions before they are sent out, the company has said. A flurry of AI-guided ambient documentation startups can rapidly transcribe and summarize patient visits and populate patients' medical charts, but they require doctors and nurses to OK the generated entries first. Products predicting health risks, like overdose or sepsis, show up as flags in medical record software, and it's up to clinicians to act on them.
