Healthcare Leaders Must Come Together to Mitigate ChatGPT Risks
Microsoft-backed OpenAI has gained over 100 million users since it launched ChatGPT five months ago. People use the AI service for all sorts of tasks, from writing essays to chatting on dating apps to producing cover letters. ChatGPT has also begun to enter the healthcare sector: healthcare software giant Epic recently announced that it will integrate the latest version of the AI model into its electronic health record.
However, healthcare leaders should be cautious about ChatGPT's entrance into the sector. During a panel at the HIMSS conference in Chicago, technology experts agreed that the AI model is exciting but needs guardrails as it is implemented in healthcare settings.
Healthcare leaders are beginning to explore potential use cases for ChatGPT, such as assisting with clinical note-taking and generating hypothetical patient questions for medical students to respond to. However, the technology carries significant risks, including risks that probably aren't even known yet.
Panelist Peter Lee, Microsoft's corporate vice president for research and incubation, said his company did not expect adoption to happen this quickly. He urged healthcare leaders to familiarize themselves with ChatGPT so they can make informed decisions about whether the technology is appropriate for use, and if so, under what circumstances.
Reid Blackman, CEO of Virtue Consultants, which provides advisory services for AI ethics, pointed out that the general public's understanding of how ChatGPT works is quite poor. Most users assume they are interacting with an AI model capable of deliberation: one that produces accurate content and can explain the reasoning behind its conclusions.
But ChatGPT wasn't designed to have a concept of truth or correctness; its objective function is to be convincing. It's meant to sound correct, not to be correct. Healthcare leaders must therefore develop a systematic way to identify the ethical risks of particular use cases, and begin assessing appropriate risk mitigation strategies sooner rather than later.
One of the panelists, Kay Firth-Butterfield, CEO of the Center for Trustworthy Technology, was among the more than 27,500 signatories of last month's open letter calling for an immediate pause of at least six months on the training of AI systems more powerful than GPT-4. Firth-Butterfield raised several ethical and legal questions, including whether the data ChatGPT is trained on is truly inclusive, given that some three billion people lack internet access, and who gets sued if something goes wrong.
The panelists agreed that these are important questions without conclusive answers right now. With AI evolving at a rapid pace, they said, the healthcare sector must establish an accountability framework for addressing the risks of new technologies like ChatGPT going forward.