WHO Warns of Potential Patient Harm Due to Unvalidated AI Tools

TL;DR:

  • The World Health Organization (WHO) warns that caution is being exercised inconsistently in the use of AI-powered large language model (LLM) tools.
  • WHO emphasizes the potential risks of errors by healthcare workers, harm to patients, erosion of trust in AI, and delayed long-term benefits if untested systems are rapidly adopted.
  • The agency proposes addressing concerns and gathering clear evidence of benefits before widespread implementation in healthcare.
  • AI-based tools like ChatGPT, Bard, and BERT can generate responses that sound authoritative but are incorrect, posing risks when applied to health-related questions.
  • These tools can also be misused to spread convincing disinformation that is difficult to distinguish from reliable health content.
  • WHO advocates for careful examination of risks and ensuring patient safety when utilizing AI tools for improving health information access and diagnostic capacity.
  • Policymakers should prioritize ethical principles, appropriate governance, and patient protection during the commercialization of LLM tools.
  • The UN health agency published “Ethics and Governance of Artificial Intelligence for Health” as a guide on AI ethics ahead of a global agreement on the subject.

Main AI News:

Large language model (LLM) tools powered by AI are becoming increasingly prevalent in various domains, raising concerns over their unchecked implementation, as cautioned by the World Health Organization (WHO). The WHO emphasizes that the hasty adoption of untested systems could lead to detrimental errors by healthcare professionals, potentially harming patients and eroding trust in AI.

Such risks have the potential to undermine the long-term benefits and global utilization of these technologies. Therefore, the agency suggests addressing these concerns and establishing clear evidence of their advantages before their widespread integration into routine healthcare and medicine.

While acknowledging the potential benefits of employing AI-based tools to assist healthcare professionals, patients, researchers, and scientists, the WHO stresses the necessity for vigilance. This is particularly relevant given the rapid expansion of platforms like ChatGPT, Bard, BERT, and others, which aim to emulate human communication and comprehension. These innovative tools possess the capability to generate responses that may appear authoritative and plausible to end users.

The inherent danger lies in the possibility of these responses being completely inaccurate or containing significant errors, particularly in matters concerning health. WHO highlights the potential misuse of such tools to propagate persuasive disinformation, including textual, audio, or video content, which can be indistinguishable from reliable health information, posing a significant challenge for the general public.

To ensure the safe utilization of AI, it is essential to carefully assess the risks associated with these novel tools. They can be employed to improve access to health information, serve as decision-support tools, or enhance diagnostic capacity in under-resourced settings, thereby promoting public health and reducing inequity.

WHO emphasizes the need for policymakers to prioritize patient safety and protection while technology firms endeavor to commercialize LLM tools. The agency reiterates the significance of adhering to ethical principles and implementing appropriate governance. In line with this objective, the UN health agency published “Ethics and Governance of Artificial Intelligence for Health” in 2021, ahead of the adoption of the first global agreement on the ethics of AI.

Conclusion:

The cautionary measures highlighted by the World Health Organization (WHO) regarding the implementation of large language model (LLM) tools in the healthcare sector have significant implications for the market. The potential risks of errors, patient harm, and erosion of trust in AI underscore the need for careful adoption and rigorous testing of these systems.

Market players involved in the development and commercialization of LLM tools must prioritize patient safety and protection while also addressing concerns and providing clear evidence of the benefits these technologies offer. Ethical principles and appropriate governance should be integral to the market’s approach to AI in healthcare, ensuring that these tools are utilized responsibly and in a manner that supports improved health outcomes and reduces inequities. Ultimately, the market’s response to these concerns will determine the long-term viability and acceptance of AI-powered tools in routine healthcare and medicine.
