- Generative AI shows promise in drafting patient portal responses, reducing clinician workload.
- Human review remains essential to ensure accuracy and patient safety.
- AI responses, although effective, often lack directive advice for patients.
- Inappropriate AI responses can pose significant patient safety risks.
- Integration of AI in healthcare requires careful monitoring and ongoing training for clinicians.
Main AI News:
The latest findings suggest that leveraging generative AI can alleviate the workload on clinicians by generating responses to patient portal messages. However, a crucial step remains: human oversight before dispatching these responses, as emphasized by researchers from Mass General Brigham.
In a recent publication in The Lancet Digital Health, the researchers underscored the indispensable role of clinicians in appending necessary instructions to patient portal responses. “Generative AI offers a promising prospect of easing clinician burden while enhancing patient education,” stated Danielle Bitterman, MD, the corresponding author and a faculty member at Mass General Brigham’s Artificial Intelligence in Medicine (AIM) Program.
Patient portal messages constitute a significant time sink for clinicians, particularly amid other administrative responsibilities. A study published in JAMIA in 2023 highlighted a surge in patient portal messages during the pandemic, with each additional message per day associated with a 2.3-minute increase in EHR usage.
The study conducted at Mass General Brigham revealed that employing generative AI and large language models (LLMs) could alleviate the strain of overflowing EHR inboxes. These models, akin to ChatGPT, can efficiently address patient queries sent via the portal, thereby relieving time constraints on healthcare providers. Some EHR vendors are already piloting this integration.
Nevertheless, caution is warranted, warned Bitterman, also a physician in Brigham and Women’s Hospital’s radiation oncology department. “Our team’s interaction with LLMs has raised concerns regarding potential risks associated with their integration into messaging systems,” she remarked. “Amid the growing prevalence of LLMs in EHRs, our study aimed to identify both their advantages and limitations.”
Utilizing GPT-4 from OpenAI, the researchers evaluated LLM-generated messages against provider-generated responses for 100 hypothetical patient queries. Six radiation oncologists evaluated and edited the AI responses, comparing them with human-crafted ones.
The results were promising, with AI responses deemed effective overall. Notably, reviewers mistook AI-generated responses for human-crafted ones 31% of the time. Moreover, 58% of AI responses required no human editing, and they often contained more comprehensive patient education content.
Despite their efficacy, AI responses necessitated human scrutiny before dissemination, advised the radiation oncologists. Directives for patients were frequently absent in AI responses, requiring clinician intervention. Moreover, while 82.1% of AI responses were deemed safe, inappropriate responses posed potentially severe consequences in some instances.
A small percentage of LLM-generated responses (7.1%) posed patient safety risks, with 0.6% carrying a risk of mortality due to inadequate urgency in advising patients to seek medical attention.
As healthcare delves deeper into integrating LLMs to enhance care, leaders must not overlook their potential to alleviate administrative burdens. Nevertheless, addressing patient safety concerns stemming from AI delegation remains imperative, as underscored by Bitterman.
“Maintaining human oversight is pivotal for ensuring safety when employing AI in medicine, yet it’s not a panacea,” Bitterman emphasized. “As reliance on LLMs grows, oversight mechanisms, clinician training in supervising LLM output, enhanced AI literacy for both patients and providers, and strategies for rectifying LLM errors become increasingly crucial.”
Conclusion:
The emergence of generative AI in patient care communication presents significant opportunities to streamline processes and enhance patient education. However, the necessity for human oversight underscores the importance of maintaining quality and safety standards. Healthcare providers and technology developers must collaborate to address challenges and maximize the benefits of AI integration while ensuring patient safety remains paramount.