TL;DR:
- AI assistants like ChatGPT are being studied for their potential role in medical consultations.
- A study comparing doctors and ChatGPT found that evaluators rated the AI tool’s responses higher in empathy and quality.
- ChatGPT’s responses must still be reviewed by healthcare professionals due to the possibility of errors and misinformation.
- AI tools have the potential to simplify medical jargon and support healthcare professionals in administrative tasks.
- Care must be taken with AI-generated responses, as they can introduce biases and should not replace human expertise.
- Ethical considerations include the need for legal frameworks, data protection, and ensuring responsible use of AI in clinical practice.
- AI can help streamline processes and improve patient outcomes, but doctors should retain the final decision-making authority.
Main AI News:
In the realm of medical consultations, a question about the risks of swallowing a toothpick elicits two distinct responses. The first indicates that, two to six hours after ingestion, the toothpick has most likely passed safely into the intestines, and notes that many people swallow toothpicks without adverse consequences.
However, it also advises seeking emergency medical attention if the patient experiences a “stomach ache.” The second response strikes a similar note, reassuring the patient that while it is normal to feel concerned, serious harm is unlikely, since toothpicks are small and made of non-toxic wood. Nevertheless, if the patient exhibits symptoms such as “abdominal pain, difficulty swallowing or vomiting,” consulting a doctor is recommended. It adds, “It’s understandable that you may be feeling paranoid, but try not to worry too much. It is highly unlikely that the toothpick will cause you any serious harm.”
Although the two responses convey a similar message, their approaches differ slightly. The first, more clinical and concise, came from a doctor; the second came from ChatGPT, the generative artificial intelligence (AI) tool that has taken the world by storm. The experiment was part of a study published in the prestigious journal JAMA Internal Medicine, which set out to explore the potential role of AI assistants in medicine.
The study compared the responses of real doctors and the chatbot to patient queries posted on an internet forum, as evaluated by an external panel of health professionals who did not know the source of each response. Astonishingly, the panel rated ChatGPT’s responses higher in both quality and empathy than those of the real doctors in 79% of cases.
The proliferation of AI tools has sparked a debate about their potential use in healthcare. Proponents envision ChatGPT, for instance, assisting healthcare professionals by easing administrative burdens and streamlining medical procedures, and replacing the often unreliable, misinformation-prone “Dr. Google” that patients consult on their own.
Experts interviewed by EL PAÍS acknowledge the tremendous potential of AI in healthcare but caution that the field is still in its infancy. They highlight the need to fine-tune the regulatory framework governing AI in medical practice to address ethical concerns, and they stress that AI is fallible. Consequently, any information generated by the chatbot must undergo a final review by a healthcare professional.
Paradoxically, it is the machine, the AI chatbot, that emerges as the more empathetic voice in the JAMA Internal Medicine study, at least in written responses. Josep Munuera, head of the Diagnostic Imaging Service at Hospital Sant Pau in Barcelona, Spain, and an expert in digital technologies applied to health, argues that empathy encompasses a broader spectrum of factors than the study could analyze: written communication differs significantly from face-to-face interaction, and posing questions on an online forum is not equivalent to a medical consultation.
Munuera asserts, “When we talk about empathy, we are addressing multiple aspects. Currently, it is challenging to replace non-verbal language, which holds immense importance when a doctor communicates with a patient or their family.” He concedes, however, that generative tools like ChatGPT have great potential for simplifying complex medical terminology. He explains, “In written communication, technical medical language can be convoluted, and we may struggle to translate it into understandable terms. These algorithms likely find equivalent alternatives to technical jargon and adapt them for the intended recipient.”
Joan Gibert, a bioinformatician and a prominent figure in AI model development at the Hospital del Mar in Barcelona, highlights an additional variable to consider when comparing the empathy of doctors and chatbots. He points out that two concepts intermingle in the study: ChatGPT itself, which can be valuable in certain scenarios and exhibits the ability to string words together to create an empathetic impression, and the issue of burnout among doctors. The emotional exhaustion experienced by clinicians while caring for patients may hinder their ability to express empathy effectively.
However, caution must be exercised when relying on responses from ChatGPT, just as with the well-known Dr. Google. Despite the chatbot’s apparent sensitivity and kindness, experts warn that it is not a doctor and can provide inaccurate information. Unlike other algorithms, ChatGPT is generative: rather than retrieving vetted answers, it composes responses from patterns in the data it was trained on, sometimes inventing information outright.
Gibert explains that these chatbots can experience “hallucinations” and may provide incorrect answers. He cautions, “Depending on the situation, it could present information that is untrue. The chatbot arranges words coherently and, due to its vast information database, it can be valuable. However, it necessitates review; otherwise, it may inadvertently fuel fake news.” Munuera emphasizes the significance of understanding the database on which the algorithm is trained, as the quality of responses is contingent upon the quality of the underlying data.
Beyond the confines of the doctor’s office, the potential applications of ChatGPT in healthcare are limited by the risk of misinformation. Jose Ibeas, a nephrologist at Parc Taulí Hospital in Sabadell, Spain, and secretary of the Big Data and Artificial Intelligence Group of the Spanish Society of Nephrology, notes that ChatGPT is useful as a first layer of information, synthesizing material and offering general guidance.
However, when it comes to complex pathologies or specialized areas, its usefulness diminishes, and it may even provide erroneous information. Munuera concurs, stating, “It is not an algorithm that helps resolve doubts. When requesting a differential diagnosis, it might fabricate a disease.” Similarly, the AI system may reassure a patient that nothing is wrong when there actually is an underlying issue. This can lead to missed opportunities for proper medical evaluation, as patients might heed the advice of the chatbot instead of seeking guidance from a qualified professional.
Where experts see the greatest potential for AI is in its role as a support tool for healthcare professionals. For instance, it could help doctors respond to patient messages, albeit under supervision. The JAMA Internal Medicine study suggests that such a tool could improve workflow and patient outcomes: by promptly addressing more patient inquiries with empathy and high standards, it could reduce unnecessary clinical visits, freeing up resources for those truly in need. Additionally, messaging services can promote patient equity, particularly benefiting individuals with limited mobility, irregular work hours, or concerns about medical expenses.
The scientific community is also exploring the use of AI tools for automating repetitive tasks like form filling and report generation. By streamlining these processes, AI could alleviate the workload of doctors and potentially enhance the quality of reports. However, researchers acknowledge the challenges associated with training algorithms, which require extensive data sets, as well as the risk of “depersonalization of care” that could engender resistance to the technology.
Ibeas stresses that for any medical application, these tools must undergo rigorous scrutiny, and the division of responsibilities must be clearly established. He argues, “The systems should never make decisions. The final sign-off must always be given by a doctor.”
As the integration of these tools into clinical practice progresses, Gibert raises several ethical considerations. He emphasizes the need for a legal framework, solutions integrated within hospital structures, and stringent safeguards to prevent the reckless sharing of patient data. He also highlights the biases that AI solutions, whether ChatGPT or diagnostic models, can introduce, which could shape how doctors interact with patients and influence their decision-making. For that reason, Gibert advises withholding an AI model’s diagnostic result from doctors until they have reached their own independent conclusion, so that the algorithm’s output does not anchor their judgment.
A group of researchers from Stanford University echoes the sentiment that AI tools can contribute to a more humanized approach to healthcare. They stress the importance of ascribing meaning to medical concepts while establishing a trusted partnership with patients to foster healthier lives. They express hope that emerging AI systems will alleviate the burdensome tasks overwhelming modern medicine, enabling physicians to refocus their attention on treating human patients.
Conclusion:
The integration of AI tools like ChatGPT into healthcare has the potential to reshape medical consultations and patient support. The study’s findings, in which evaluators rated the AI’s responses as more empathetic and of higher quality than doctors’ replies, highlight its value as a resource for healthcare professionals.
However, the limitations and risks associated with AI-generated information emphasize the continued importance of human review and supervision. AI can streamline administrative tasks, simplify medical language, and potentially improve patient outcomes. Ethical considerations such as data protection and preventing biases need to be addressed. The market can expect increased adoption of AI tools as support systems in healthcare, with a focus on complementing and augmenting human expertise rather than replacing it entirely.