The Role of ChatGPT in Emergency Care: A Deep Dive

Published October 9, 2024


If ChatGPT were cut loose in the Emergency Department (ED), it might suggest unneeded X-rays and antibiotics for some patients and admit others who didn’t require hospital treatment, a new study from UC San Francisco has found.

The researchers said that, while the model could be prompted in ways that make its responses more accurate, it’s still no match for the clinical judgment of a human doctor. “This is a valuable message to clinicians not to blindly trust these models,” said postdoctoral scholar Chris Williams, MB BChir, lead author of the study, which appears in Nature Communications.

ChatGPT's Performance in Emergency Care

Recently, Williams showed that ChatGPT, a large language model (LLM) being studied for clinical applications of AI, was slightly better than humans at determining which of two emergency patients was more acutely unwell, a straightforward choice between patient A and patient B.

With the current study, Williams challenged the AI model to perform a more complex task: providing the recommendations a physician makes after initially examining a patient in the ED. These include deciding whether to admit the patient, order X-rays or other scans, or prescribe antibiotics.


For each of the three decisions, the team compiled a set of 1,000 ED visits to analyze, drawn from an archive of more than 251,000 visits. Each set preserved the same ratio of “yes” to “no” responses for decisions on admission, radiology, and antibiotics that is seen across UCSF Health’s Emergency Department.
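
As a rough illustration of this sampling step, here is a minimal Python sketch. It assumes a hypothetical pandas DataFrame of visits with one binary column per decision; the study’s actual data pipeline is not public, so the column names and structure here are invented for clarity.

```python
import pandas as pd

# Hypothetical illustration: assume `visits` is a DataFrame of ED visits with
# a binary column per decision, e.g. visits["admit"] in {0, 1}, mirroring the
# "yes"/"no" labels described in the study.

def sample_matching_ratio(visits: pd.DataFrame, label: str, n: int = 1000,
                          seed: int = 0) -> pd.DataFrame:
    """Draw n visits whose yes/no ratio for `label` matches the full archive."""
    pos_rate = visits[label].mean()      # observed "yes" rate in the archive
    n_pos = round(n * pos_rate)          # number of "yes" cases to sample
    pos = visits[visits[label] == 1].sample(n_pos, random_state=seed)
    neg = visits[visits[label] == 0].sample(n - n_pos, random_state=seed)
    return pd.concat([pos, neg]).sample(frac=1, random_state=seed)  # shuffle

# One 1,000-visit evaluation set per decision, as in the study's design:
# admission_set   = sample_matching_ratio(visits, "admit")
# radiology_set   = sample_matching_ratio(visits, "radiology")
# antibiotics_set = sample_matching_ratio(visits, "antibiotics")
```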

AI Recommendations and Accuracy


Using UCSF’s secure generative AI platform, which has broad privacy protections, the researchers entered doctors’ notes on each patient’s symptoms and examination findings into ChatGPT-3.5 and ChatGPT-4. Then, they tested the accuracy of each set with a series of increasingly detailed prompts.
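
The study’s exact prompts and UCSF’s secure platform are not reproduced here, but the sketch below shows what a “prompt ladder” of increasing detail might look like using the public OpenAI Python client. The prompt wording is hypothetical and the model names stand in for the versions the researchers tested.

```python
from openai import OpenAI

client = OpenAI()  # illustration only; the study used UCSF's secure platform

# Hypothetical prompt ladder, from a bare question to more detailed instructions.
PROMPTS = [
    "Should this patient be admitted? Answer yes or no.",
    "You are an emergency physician. Based on the note below, should this "
    "patient be admitted? Answer yes or no.",
    "You are an emergency physician. Consider the patient's symptoms and exam "
    "findings in the note below, then decide whether admission is required. "
    "Answer with a single word: yes or no.",
]

def ask_model(note: str, prompt: str, model: str = "gpt-4") -> str:
    """Send one clinical note with one prompt variant; return the yes/no reply."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": prompt},
                  {"role": "user", "content": note}],
        temperature=0,  # keep outputs as deterministic as possible for evaluation
    )
    return resp.choices[0].message.content.strip().lower()
```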

Overall, the AI models tended to recommend services more often than was needed. ChatGPT-4 was 8% less accurate than resident physicians, and ChatGPT-3.5 was 24% less accurate.
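
A comparison like this comes down to scoring the models’ yes/no answers against the physicians’ recorded decisions. The snippet below is a hypothetical scoring helper in that spirit, not the study’s actual evaluation code.

```python
# Hypothetical scoring step: compare model answers with the recorded decisions
# (1 = yes, 0 = no) and report accuracy per model.

def accuracy(predictions: list[str], labels: list[int]) -> float:
    """Fraction of yes/no answers matching the recorded decision."""
    hits = sum(p.startswith("yes") == bool(y) for p, y in zip(predictions, labels))
    return hits / len(labels)

# e.g. accuracy(gpt4_answers, admission_set["admit"].tolist())
# The study reports such accuracies relative to resident physicians' decisions.
```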

Williams highlighted the AI’s tendency to overprescribe, suggesting it could stem from the models being trained on internet text. He emphasized the importance of finding the right balance in AI applications to avoid unnecessary interventions that could harm patients.

Challenges in AI Implementation

He noted that models like ChatGPT will need better frameworks for evaluating clinical information before they are ready for the ED. Researchers and the clinical community need to carefully consider how these AI models should perform in real-life clinical settings.


Developing medical applications of AI requires striking a balance between safety and efficiency, ensuring that AI models don’t trigger unnecessary exams and expenses while not missing serious conditions.

As the technology evolves, the clinical community must continue to refine the use of AI in healthcare settings, keeping in mind the nuances and challenges associated with implementing such models.

Authors: Additional authors include Brenda Miao, Aaron Kornblith, and Atul Butte, all of UCSF.

Funding: The Eunice Kennedy Shriver National Institute of Child Health and Human Development of the National Institutes of Health (K23HD110716).

For more information, please refer to the original study.