AI in Healthcare: WHO Warns of Potential Pitfalls for Developing Nations

TL;DR:

  • WHO warns that AI-based healthcare technologies may pose risks to lower-income countries.
  • Emphasizes the importance of incorporating data from under-resourced regions in AI model training.
  • Rapid adoption of large multi-modal models (LMMs) in healthcare prompted a reassessment of the WHO’s 2021 guidance.
  • WHO issues updated guidelines to ensure AI benefits public health and avoids potential pitfalls.
  • Concerns about a global “race to the bottom” and “model collapse” due to inadequate AI regulation.
  • Call for collaborative leadership between governments, tech companies, and civil-society groups.
  • Risks of “industrial capture” as major corporations dominate AI research.
  • Recommendations for mandatory post-release audits, ethics training, and algorithm registration.

Main AI News:

The World Health Organization (WHO) has issued a stern warning about the potential dangers posed by the introduction of artificial intelligence (AI)-based healthcare technologies, particularly for lower-income nations. In a recent report outlining new guidelines on large multi-modal models (LMMs), the WHO emphasized the critical need to ensure that the development and deployment of AI technologies are not solely influenced by tech companies and affluent nations. Failure to incorporate data from under-resourced regions in the training of AI models could result in these populations being inadequately served by the algorithms, the organization cautions.

“The very last thing that we want to see happen as part of this leap forward with technology is the propagation or amplification of inequities and biases in the social fabric of countries around the world,” warned Alain Labrique, the WHO’s director for digital health and innovation, during a media briefing.

The WHO had previously issued AI guidelines for healthcare in 2021, but the rapid growth in the capabilities and availability of LMMs prompted a reassessment less than three years later. These generative AI models, including the one powering the widely used ChatGPT chatbot, have seen unparalleled adoption, particularly in the healthcare sector. LMMs are capable of generating clinical notes, completing forms, and aiding physicians in diagnosing and treating patients. Numerous companies and healthcare providers are actively developing AI tools for medical applications.

The primary objective of the WHO’s updated guidelines is to ensure that the exponential expansion of LMMs contributes to and safeguards public health rather than jeopardizing it. The organization warns against a potential global “race to the bottom,” in which companies rush to release applications even if they are ineffective or unsafe. There is also the concern of “model collapse,” a cycle in which LMMs trained on inaccurate or false information produce outputs that further contaminate public information sources, including the internet.

Jeremy Farrar, the WHO’s chief scientist, emphasized, “Generative AI technologies have the potential to improve healthcare, but only if those who develop, regulate, and use these technologies identify and fully account for the associated risks.”

The WHO asserts that the responsibility for overseeing these powerful AI tools cannot rest solely with tech companies. It calls for collaborative leadership from governments worldwide in effectively regulating the development and utilization of AI technologies. Additionally, the involvement of civil-society groups and healthcare recipients is crucial at all stages of LMM development and deployment, including oversight and regulation.

The WHO report also highlights the risk of “industrial capture” in LMM development due to the high costs associated with training, deploying, and maintaining these programs. Notably, there is a trend of major corporations outpacing universities and governments in AI research, with a substantial exodus of doctoral students and faculty into the private sector.

The guidelines recommend mandatory post-release audits, conducted by independent third parties, of any LMM deployed on a large scale. These audits should assess how well a tool protects both data and human rights. The WHO further suggests that software developers and programmers working on LMMs for healthcare or scientific research undergo ethics training similar to that of medical professionals. Governments are encouraged to require early registration of algorithms, which would promote transparency, encourage the publication of negative results, and curb publication bias and unwarranted hype.

Conclusion:

The WHO’s warnings and updated guidelines underscore the need for responsible AI development in healthcare. Market players must prioritize equity, transparency, and ethics to harness the potential of AI while mitigating risks. Failure to do so may lead to regulatory challenges and public distrust, impacting the growth and reputation of the AI in healthcare market.
