Bias Sensitivity in Diagnostic Decision-Making: Comparing ChatGPT with Residents
Diagnostic errors, often rooted in biases in clinical reasoning, significantly affect patient care. While artificial intelligence chatbots such as ChatGPT could help mitigate such biases, their own susceptibility to bias is unknown.

Evaluating Diagnostic Accuracy
This study compared the diagnostic accuracy of ChatGPT (versions 3.5 and 4.0) with the performance of 265 medical residents in five previously published experiments designed to induce bias. The biases studied were case-intrinsic (salient distracting findings in the patient history, disruptive patient behaviors) and situational (prior availability of a look-alike patient). Accuracy was measured as the ability to identify the most likely diagnosis.
Overall, the diagnostic accuracy of the residents and ChatGPT was equivalent. On cases involving case-intrinsic bias, however, both showed a decline: residents' accuracy decreased by an average of 12%, ChatGPT 4.0's by 21%, and ChatGPT 3.5's by 9%. These findings suggest that, like human diagnosticians, ChatGPT is sensitive to bias when the biasing information is part of the patient history.

When the biasing information was extrinsic to the case, in the form of prior availability of a look-alike case, residents' accuracy decreased by 15%, whereas ChatGPT's performance was unaffected. Chi-square goodness-of-fit tests corroborated these outcomes.
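The summary does not reproduce the authors' analysis code, but a chi-square goodness-of-fit test of the kind described could look like the following minimal Python sketch. All counts here are hypothetical, chosen only to illustrate the comparison of accuracy on biased cases against the accuracy expected if bias had no effect.

```python
# Minimal sketch (not the authors' code) of a chi-square goodness-of-fit test.
# All numbers are hypothetical and for illustration only.
from scipy.stats import chisquare

# Suppose 100 biased cases were answered, with 62 correct diagnoses.
# Under the null hypothesis, accuracy matches the rate observed on neutral
# versions of the same cases (assumed 83% here), so we expect 83 correct
# and 17 incorrect out of 100.
observed = [62, 38]   # correct, incorrect on biased cases
expected = [83, 17]   # counts expected if bias had no effect

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
# A small p-value indicates that accuracy on biased cases departs from the
# accuracy expected in the absence of bias, i.e., a bias effect.
```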
Implications and Future Research
ChatGPT thus appears insensitive to bias when the biasing information is situational, but sensitive when it is embedded in the patient's history. It shows promise as a diagnostic support tool, but caution is warranted. Future research should focus on improving AI's ability to detect and mitigate bias before it can be relied on for diagnostic support.
