Feinstein Institutes’ Study Raises Concerns about ChatGPT’s Readiness for Medical Education

TL;DR:

  • The Feinstein Institutes conducted a study to evaluate ChatGPT’s performance on the American College of Gastroenterology (ACG) Self-Assessment Tests.
  • ChatGPT versions 3 and 4 were tested and achieved success rates of 65.1% and 62.4% respectively, falling short of the passing threshold.
  • ChatGPT lacks an inherent understanding of subjects and may source information from questionable or outdated non-medical sources.
  • The study suggests that ChatGPT should not be used for medical education in gastroenterology at present.
  • Further research and development are needed before ChatGPT can be considered for implementation in the healthcare field.

Main AI News:

The potential of ChatGPT, an advanced natural language processing model developed by OpenAI, has been put to the test by researchers at the Feinstein Institutes. Their study aimed to assess whether ChatGPT, specifically versions 3 and 4, could pass the American College of Gastroenterology (ACG) Self-Assessment Tests, which are designed to gauge readiness for the American Board of Internal Medicine (ABIM) Gastroenterology board examination.

To conduct the experiment, the researchers fed the exact questions from the 2021 and 2022 ACG tests into both versions of ChatGPT, with the passing threshold set at 70% or higher. Each question and its answer choices were copy-pasted into ChatGPT, resulting in 455 questions being processed (145 were omitted because they required images).

The outcomes of the study revealed that ChatGPT version 3 answered 296 of the 455 questions correctly, a success rate of 65.1%, while ChatGPT version 4 answered 284 correctly, a success rate of 62.4%. While these results demonstrate some level of competence, both fall short of the 70% passing threshold.
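
As a quick arithmetic check, the reported success rates follow directly from the raw counts. The short Python sketch below (a minimal illustration, not part of the study’s methodology) reproduces the percentages and compares them against the 70% passing threshold:

```python
# Minimal sketch: reproduce the reported success rates from the article's counts.
QUESTIONS_SCORED = 455        # ACG questions submitted (image-based items excluded)
PASSING_THRESHOLD = 0.70      # 70% or higher required to pass

correct_answers = {
    "ChatGPT version 3": 296,  # correct answers reported for version 3
    "ChatGPT version 4": 284,  # correct answers reported for version 4
}

for model, correct in correct_answers.items():
    rate = correct / QUESTIONS_SCORED
    verdict = "pass" if rate >= PASSING_THRESHOLD else "fail"
    print(f"{model}: {correct}/{QUESTIONS_SCORED} = {rate:.1%} -> {verdict}")

# Output:
# ChatGPT version 3: 296/455 = 65.1% -> fail
# ChatGPT version 4: 284/455 = 62.4% -> fail
```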

Dr. Andrew C. Yacht, the senior vice president of academic affairs and chief academic officer at Northwell Health, highlighted the mixed sentiments surrounding ChatGPT. While the model has generated considerable enthusiasm, there are also concerns about its accuracy and validity within the healthcare and education sectors. Dr. Yacht emphasized the need to approach the role of AI in these fields with a certain level of skepticism.

Although ChatGPT shows promise as a potential educational tool, the study suggests that it cannot yet pass the self-assessment examinations used to prepare for medical specialty certification. The researchers believe that further progress and research are required before ChatGPT can be considered for implementation in the healthcare domain, particularly in gastroenterology education.

Dr. Arvind Trindade, an associate professor at the Feinstein Institutes’ Institute of Health System Science and senior author of the paper, emphasized the importance of investigating ChatGPT’s potential in medical education. He acknowledged the current attention and interest surrounding AI applications across various industries, including healthcare. Based on the research findings, however, he concluded that ChatGPT should not be used for medical education in gastroenterology at present, and that more development is needed before it can be integrated into the healthcare field.

One of the limitations identified in ChatGPT’s performance is its lack of inherent understanding of subjects or problems. This could be attributed to several factors, such as limited access to paid subscription medical journals or ChatGPT’s reliance on potentially outdated or non-medical sources. Consequently, further research is necessary to enhance its reliability and ensure it meets the rigorous standards of medical education.

Conclusion:

The Feinstein Institutes’ study reveals that ChatGPT, despite its potential as an educational tool, is not yet suitable for medical education in gastroenterology. The model’s performance on the ACG assessments fell short of the passing threshold, indicating limitations in its understanding of medical subjects. This finding highlights the importance of thorough research and of ensuring the reliability of AI tools before deploying them in healthcare settings.

While ChatGPT has generated enthusiasm, skepticism remains regarding its accuracy and validity. AI applications in healthcare and education should be approached cautiously, with continued development and rigorous evaluation needed to meet the industry’s standards and expectations.

Source