AI in Health Insurance: Claims, Regulation, and Concerns
AI in Healthcare: California Takes Aim at Algorithm-Driven Denials
California is poised to become the first state to regulate the use of artificial intelligence (AI) in health insurance prior authorization, a move that comes amid growing concerns about algorithmic bias and patient access to care.
Starting in January, a new law will require insurers to be more transparent about how they use AI to determine coverage for treatments. The law follows a national conversation about algorithm-driven claim denials that intensified after the killing of UnitedHealthcare CEO Brian Thompson, which put insurers' denial practices under sharp public scrutiny.
Prior to the rise of AI, physicians at insurance companies manually reviewed treatment requests. Now, insurers increasingly rely on algorithms to analyze patient data and make coverage decisions.
“The hospital submits detailed information about the patient’s condition to the insurer,” explains Jeff Marr, a researcher at Johns Hopkins University studying the use of algorithms in insurance. “The insurer then uses this data in their algorithm to determine the level of care needed, which forms the basis of their decision.”
While AI can streamline administrative tasks and potentially reduce costs, critics argue that it can also lead to unfair and inaccurate denials.
Ryan Clarkson, managing partner of Clarkson Law Firm, represents patients who claim they’ve been denied necessary care due to flawed algorithms. “We’ve seen cases where patients undergo surgery, their doctor prescribes 21 days of physical therapy, but the insurer cuts them off after 10 days, demanding they pay out of pocket,” he says.
Clarkson believes the use of AI in insurance claims and requests is widespread. “I suspect anyone who has received a medical bill in the past few years has likely interacted with AI in some way,” he asserts.
The lack of transparency from the insurance industry makes it difficult to determine the extent to which algorithms are used. However, experts like Nicholson Price, a law professor at the University of Michigan, warn that these algorithms are often trained on limited datasets, which can perpetuate biases and lead to inaccurate decisions.
“Medical care requires nuance,” Price emphasizes. “Reducing complex, personal health decisions to a machine-driven process raises serious ethical concerns.”
California’s new law aims to address these concerns by requiring greater transparency and accountability from insurers. Whether it will be enough to ensure fair and equitable access to care remains to be seen.