TL;DR:
- Humana is facing a class-action lawsuit alleging it used an AI algorithm to deny seniors rehabilitation care that their doctors had recommended.
- Humana is the second major health insurer sued over using AI to restrict necessary care for Medicare Advantage patients, following a similar case against UnitedHealth Group.
- The lawsuit highlights growing scrutiny of how insurers employ AI tools for coverage decisions while regulators struggle to keep up.
- The suit claims Humana replaced medical professionals’ judgment with an inaccurate algorithm and imposed strict performance targets on employees.
- JoAnne Barrows, an 86-year-old plaintiff, had her post-acute care and rehabilitation treatment cut off, causing her health to deteriorate.
- The plaintiffs seek damages and an order to stop Humana from using the algorithm.
- Humana maintains that it uses “augmented” intelligence that includes human input in decision-making.
- Government bodies, including Congress and the Biden administration, are concerned about Medicare Advantage coverage denials and are calling for stricter oversight of AI algorithms.
Main AI News:
Health insurance companies are facing increasing scrutiny and legal challenges as they turn to artificial intelligence (AI) algorithms to make coverage decisions for Medicare Advantage patients. In a recent class-action lawsuit, Humana is accused of using an AI algorithm to systematically deny seniors rehabilitation care recommended by their doctors. The suit follows a similar one against UnitedHealth Group over its use of the same algorithm, highlighting a growing concern in the healthcare industry. As regulators race to catch up with these technological advancements, the implications for patient care and the insurance industry are significant.
The Rise of AI in Healthcare
The healthcare industry has witnessed a significant influx of AI tools and algorithms aimed at streamlining processes, reducing costs, and improving patient outcomes. Among these innovations, health insurers have employed AI-powered algorithms to assess claims and make coverage decisions. These algorithms are designed to predict how much care a patient will need and, in some cases, have supplanted the judgment of medical professionals.
The Allegations Against Humana
The class-action lawsuit filed against Humana claims that the company has been using an AI algorithm to deny elderly patients care they are entitled to under Medicare Advantage plans. The lawsuit alleges that the algorithm was used in place of medical professionals' judgment and was enforced through strict performance targets for employees. Moreover, the suit contends that Humana continued using the algorithm despite knowing it was highly inaccurate.
A Personal Story
One of the plaintiffs, JoAnne Barrows, an 86-year-old Minnesota woman, fell at home and fractured her leg. Her doctor prescribed six weeks of post-acute care and rehabilitation, but Humana stopped paying after just two weeks and denied further rehabilitation treatment. Her appeals were denied, and she was deemed ready to return home despite her medical condition. Her family had to pay out of pocket for substandard care, and her health deteriorated.
The Plaintiffs’ Claims
The lawsuit asserts that Humana systematically uses this flawed AI model to deny claims, banking on the fact that only a small minority of policyholders will appeal. The plaintiffs are seeking damages and an order barring Humana from continuing to use the algorithm.
The Response from Humana
In response to the allegations, a Humana spokesman stated that the company uses “augmented” intelligence, which includes human input in decision-making when AI is employed. Coverage decisions, according to Humana, are based on healthcare needs, medical judgment, and guidelines from relevant healthcare authorities.
Government and Regulatory Response
These class-action lawsuits come at a time when Congress and the Biden administration are expressing concerns about coverage denials in Medicare Advantage plans. Lawmakers are investigating how often care is denied in Medicare Advantage compared to traditional Medicare. In November, a Senate panel questioned Medicare Advantage plans about their coverage-denial policies, specifically those involving AI-powered algorithms. House Democrats have also called on the Centers for Medicare and Medicaid Services to impose stricter oversight of AI algorithms used in Medicare Advantage denials.
Looking Ahead
In January, federal rules will begin restricting how Medicare Advantage plans can use predictive algorithms to make coverage decisions. The outcome of these lawsuits and the impending regulatory changes will shape the future of AI in healthcare, with far-reaching implications for patient care, insurance practices, and the legal landscape. As the healthcare industry grapples with the integration of AI, striking the right balance between automation and human judgment becomes paramount for ensuring patient welfare and fair insurance practices.
Conclusion:
The growing number of lawsuits and the regulatory scrutiny surrounding Medicare insurers' use of AI algorithms to deny care highlight a significant challenge in the healthcare market. While AI offers potential benefits in streamlining processes and reducing costs, fair and accurate coverage decisions still depend on meaningful human judgment alongside automation. These legal battles and increased oversight signal a growing demand for transparency and accountability in the use of AI in healthcare, which may reshape how insurers and healthcare providers employ these technologies in the future.