TL;DR:
- Google introduces MedLM, a family of healthcare-focused generative AI models.
- MedLM is based on Med-PaLM 2 and aims to assist healthcare workers in various medical tasks.
- Two MedLM models are available: one for complex tasks and one for scalability across functions.
- Early adopters like HCA Healthcare and BenchSci are already benefiting from MedLM.
- Google competes with Microsoft and Amazon in the burgeoning healthcare AI market.
- Historically, AI in healthcare has faced challenges and controversies.
- WHO expresses concerns about the risks of using generative AI in healthcare.
- Google emphasizes responsible AI deployment and commitment to safety.
Main AI News:
Google is unveiling MedLM, a family of generative AI models tailored specifically for the healthcare industry, with the aim of enhancing the capabilities of healthcare workers. Announced officially today, MedLM builds on Med-PaLM 2, a Google-developed model that has demonstrated “expert level” performance on a range of medical examination questions. The technology is now accessible to Google Cloud customers in the United States through Vertex AI, Google’s AI development platform, with a preview available to allowlisted customers in select other markets.
MedLM comprises two distinct models, each designed to cater to different aspects of healthcare. The first is a larger model, intended for handling intricate and complex tasks, while the second is a smaller, fine-tunable model optimized for scalability across various healthcare-related functions. According to Yossi Matias, VP of Engineering and Research at Google, “The effectiveness of a model for a specific task can vary based on the use case. For instance, one model may excel at summarizing conversations, while another may be more proficient at searching through medication data.”
MedLM has already found applications with early adopters. HCA Healthcare, a for-profit facility operator, has been piloting the models alongside physicians to streamline the drafting of patient notes in hospital emergency departments. Likewise, BenchSci has integrated MedLM into its “evidence engine,” which identifies, classifies, and ranks novel biomarkers.
Google’s strategic move into the healthcare AI market puts it in direct competition with industry giants like Microsoft and Amazon, as they race to secure a share of a market projected to be worth tens of billions of dollars by 2032. Amazon has recently launched AWS HealthScribe, a platform that utilizes generative AI to transcribe, summarize, and analyze patient-doctor conversations. Meanwhile, Microsoft is actively piloting various AI-powered healthcare solutions, including medical “assistant” applications driven by large language models.
However, it is crucial to approach the integration of AI in healthcare with caution, as the technology has historically produced mixed results. Babylon Health, an AI startup backed by the U.K.’s National Health Service, came under scrutiny for claiming that its disease-diagnosing technology could outperform human doctors. IBM’s Watson Health division faced repeated setbacks and was ultimately sold at a loss after technical issues disrupted customer partnerships.
One might argue that generative models, such as those in Google’s MedLM family, are significantly more advanced than their predecessors. Nevertheless, research indicates that these models can answer healthcare-related queries inaccurately, even relatively straightforward ones. One study, co-authored by a group of ophthalmologists, compared ChatGPT and Google’s Bard chatbot on questions about eye conditions and diseases and found inaccuracies in both. ChatGPT has also been shown to generate cancer treatment plans containing potentially fatal errors, and both ChatGPT and Bard have produced misguided, debunked medical ideas when queried about kidney function, lung capacity, and skin conditions.
The World Health Organization (WHO) has expressed concerns about the risks of using generative AI in healthcare, including the generation of incorrect or harmful answers, the propagation of health-related misinformation, and the inadvertent disclosure of sensitive health data. The WHO emphasizes the need for caution and thorough testing when adopting new technologies in healthcare: hasty implementations could lead to errors, patient harm, a loss of trust in AI, and delays in realizing the technology’s long-term benefits.
Google remains steadfast in its commitment to responsible AI deployment in healthcare, emphasizing its dedication to providing a safe and ethical approach to technology adoption. Yossi Matias reaffirms this commitment, stating, “[W]e’re focused on enabling professionals with a safe and responsible use of this technology and ensuring that these benefits are accessible to all.” As Google continues to pioneer innovative solutions in healthcare, the world watches with anticipation to see the transformative impact of MedLM and similar advancements in the field.
Conclusion:
Google’s MedLM represents a significant step forward for generative AI in healthcare. Competition among tech giants like Google, Microsoft, and Amazon is intensifying as the healthcare AI market grows, but caution remains essential given the field’s past challenges in implementing AI. Google’s stated dedication to responsible AI deployment underscores its commitment to safe and ethical solutions for the healthcare industry, with transformative potential for the field.