TL;DR:
- Cigna Healthcare is facing a federal class action lawsuit for using algorithms to deny insurance claims in large batches, allegedly violating California law.
- The lawsuit is part of a mounting series of AI-related claims filed by the advocacy law firm Clarkson, first against tech companies and now against healthcare providers.
- The PxDx system used by Cigna is accused of rejecting around 300,000 claims over a two-month period, with claims denied in an average of just 1.2 seconds each.
- Internal documents suggest bulk confirmation of algorithm decisions rather than individual review, leading to legal concerns.
- Roughly 80% of claim denials appealed by Cigna customers were ultimately overturned.
- The lawsuit highlights the debate over whether algorithms can provide adequate case review and adherence to state law.
Main AI News:
In a recent development, Cigna Healthcare finds itself embroiled in a federal class action lawsuit alleging that the company used algorithms to deny insurance claims in large batches as part of an almost entirely automated claims decision process. The lawsuit, filed in California’s eastern district, accuses the company of violating California law, which requires medical professionals to conduct “thorough, fair, and objective” reviews of insurance claims.
This legal battle takes center stage as part of a mounting series of AI-related claims filed by Clarkson, a prominent public advocacy law firm. Clarkson has previously targeted tech giants OpenAI and Google on behalf of creators who contend that AI systems unlawfully appropriated their data and creative output.
At the heart of the matter lies the question of whether an algorithm can genuinely fulfill the individual case review requirements mandated by California health insurance law, or whether only human review can meet the stringent standards outlined in the state’s legislation.
The contentious PxDx system (short for procedure-to-diagnosis) employed by Cigna came under scrutiny for rejecting around 300,000 pre-approved claims over a two-month period last year, each allegedly in an average of just 1.2 seconds. Most strikingly, a single Cigna medical director, Cheryl Dopke, reportedly rejected 60,000 claims in a single month.
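The "procedure-to-diagnosis" name suggests a simple code-to-code lookup rather than a clinical judgment. As a rough illustration only (the codes, table, and function below are hypothetical, and Cigna's actual rules and system are not public), such a check reduces to a dictionary lookup, which is why it can run in a fraction of a second per claim:

```python
# Hypothetical sketch of a procedure-to-diagnosis matching rule, inferred only
# from the "PxDx" name and public reporting; Cigna's actual system is not public.
# A claim is flagged for denial when the submitted diagnosis code is not on the
# insurer's list of diagnoses considered to justify that procedure.

ALLOWED_DIAGNOSES = {
    # procedure code -> diagnosis codes accepted for it (illustrative values)
    "82306": {"E55.9", "M81.0"},   # vitamin D test: deficiency, osteoporosis
    "87081": {"J02.0"},            # throat culture: strep pharyngitis
}

def flag_claim(procedure_code: str, diagnosis_code: str) -> str:
    """Return 'pay' or 'deny' using a pure code-to-code lookup -- no review
    of the patient's chart, which is the practice the lawsuit challenges."""
    allowed = ALLOWED_DIAGNOSES.get(procedure_code)
    if allowed is not None and diagnosis_code not in allowed:
        return "deny"
    return "pay"

# Batch processing: thousands of claims can be labeled per second this way,
# consistent with the 1.2-second average decision time alleged in the suit.
claims = [("82306", "E55.9"), ("82306", "Z00.00"), ("87081", "J02.0")]
decisions = [flag_claim(p, d) for p, d in claims]
```

The point of the sketch is scale: a lookup like this imposes no lower bound on decision time, so the only way a human "review" of its output keeps pace is by confirming results in bulk.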
Internal company documents have led ProPublica’s investigative reporters and Clarkson to assert that the only plausible way to reach decisions this quickly is bulk confirmation of the algorithm’s outcomes, rather than the individual review mandated by the law.
The lawsuit contends that Cigna unlawfully delegated its obligation to evaluate and investigate claims to the PxDx system, thereby misleading its California customers into believing their health plans would receive individual assessments of their claims.
Furthermore, data reveals that around 80 percent of the initial claim denials appealed by Cigna customers were ultimately overturned. That figure has prompted the California Department of Insurance, in collaboration with other state regulators, to examine the claims presented in the lawsuit.
Amid these developments, it is worth noting that two out of three Americans express concern over surprise medical bills, making the outcome of this lawsuit particularly significant.
In response to the accusations, Clarkson managing partner Ryan Clarkson asserted, “They’ve harnessed advancing technology not to improve people’s lives, but to summarily reject thousands of valid claims in the name of efficiency.”
Cathy McMorris Rodgers (R-Wash.), chair of the House Energy and Commerce Committee, raised concerns in May, suggesting that the high rate of successful appeals against PxDx decisions indicates that policyholders are paying out of pocket for medical expenses that should be covered under their health insurance contracts.
Conclusion:
The legal action against Cigna Healthcare and its AI claims denial process is indicative of a larger issue surrounding the use of algorithms in the market. As the healthcare industry increasingly turns to automation and AI for efficiency gains, regulatory scrutiny and legal challenges may rise. Market players will need to carefully weigh the balance between leveraging technology and upholding legal and ethical standards to avoid potential liabilities and reputational damage. Transparency and fairness in AI decision-making will become critical factors for success in the evolving market landscape.