A study has found that AI has a better “bedside manner” than some doctors

TL;DR:

  • A study evaluated ChatGPT’s written advice compared to human doctors.
  • The study used data from Reddit’s AskDocs forum and found that ChatGPT’s responses were preferred by a panel of healthcare professionals 79% of the time.
  • The study highlights the potential for AI to assist in response generation for medical advice.
  • However, experts caution against relying solely on language models for factual information due to the potential for false “facts.”
  • The use of AI technology in healthcare must be approached with caution and proper evaluation.
  • Further research is needed to determine the extent to which AI technology like ChatGPT can assist physicians in response generation.

Main AI News:

A recent study has demonstrated the potential for AI assistants such as ChatGPT to play a role in the medical industry. The study, published in the journal JAMA Internal Medicine, compared ChatGPT’s written advice with that of human doctors and found that a panel of healthcare professionals preferred the AI language model’s responses 79% of the time.

The study used data from Reddit’s AskDocs forum, where members post medical questions that are answered by verified healthcare professionals. ChatGPT was asked to respond to randomly selected questions that had already been answered by human doctors. A panel of licensed healthcare professionals, blinded to the source of each answer, then rated the responses for quality and empathy.

The results showed that ChatGPT’s responses were rated as good or very good in quality 79% of the time, versus only 22% of doctors’ responses, and 45% of ChatGPT’s answers were rated as empathic or very empathic, versus just 5% of doctors’ replies.

While the study does not suggest that ChatGPT can replace human doctors, it does highlight the potential for AI to assist in response generation for medical advice. However, experts caution against relying solely on language models for factual information, as they may generate false “facts.”

Additionally, humans may tend to place too much trust in machine-generated responses, which could lead to inadequate scrutiny of the AI’s advice. To mitigate this risk, experts suggest occasionally inserting deliberately incorrect synthetic responses to test whether reviewers remain vigilant.

The potential for AI in the medical industry is significant, and its use in drafting medical advice for review by clinicians is a promising area for early adoption. However, it is crucial to approach the use of AI technology in healthcare with caution and proper evaluation, as well as an understanding of the limitations of language models in generating accurate and trustworthy information.

In addition, while the panel preferred ChatGPT on both quality and empathy, it is worth noting that the AI language model was specifically optimized to be likable, and its longer, more conversational answers may have contributed to its higher ratings. Further research is needed to determine the extent to which AI technology like ChatGPT can assist physicians in response generation, and to ensure that its use in healthcare is appropriate and effective.

Conclusion:

The study on ChatGPT’s role in the medical industry highlights the promise of AI-assisted response generation for medical advice and presents a significant opportunity for the healthcare market. However, the use of AI technology demands caution and proper evaluation, and further research and development are necessary to fully realize its benefits and ensure its appropriate application in healthcare.

Source