Texas Senator Pushes for Stricter AI Regulations in Health Insurance Claims
AI in Health Insurance: A Double-Edged Sword
The health insurance industry is increasingly turning to artificial intelligence (AI) to streamline processes like reviewing patient claims. AI promises faster results, reduced administrative costs, and fewer delays. However, as Texas lawmaker Senator Charles Schwertner highlights with his recently proposed legislation, this growing reliance on AI also comes with serious risks—especially when AI systems replace human expertise in critical medical decisions.
Schwertner introduced a bill on January 16 that aims to limit the role of AI in healthcare decision-making. Specifically, the legislation would prohibit health insurance companies from using AI as the sole factor in determining whether a medical procedure or service is necessary. Decisions like these, Schwertner argues, should remain in the hands of trained physicians or licensed healthcare providers.
This is not merely a Texas issue. Across the nation, the use of AI in healthcare has sparked concerns about fairness, transparency, and accuracy. Insurers claim that AI helps make their operations more efficient, but critics warn that these algorithms may deny patients the care they need, sometimes overriding doctors’ recommendations.
Concerns Over AI Bias and Transparency
One of the biggest challenges of using AI in healthcare is its “black box” nature—an inherent lack of transparency. Even the developers of advanced AI technologies sometimes struggle to explain how these systems arrive at their decisions. Will Fleisher, a Research Assistant Professor at Georgetown University’s Center for Digital Ethics, points out that this lack of clarity could have profound impacts on patients.
“If a claim is denied by a complex AI system,” Fleisher explained, “the company might not be able to explain why it was denied. Even if they understand the reason, they could use the system’s opacity as an excuse to avoid providing an explanation.”
Bias is another pressing issue. AI systems are only as good as the data they are trained on, and historical datasets can reflect existing inequalities. For example, algorithms might unintentionally favor certain demographic groups over others, producing results that disadvantage marginalized communities. Critics argue that, in many cases, these biases could have life-or-death consequences for patients unable to easily access or afford necessary care.
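To see how that can happen, consider a deliberately simplified, hypothetical sketch: a system that merely learns historical approval rates per group will carry any historical gap straight into its future decisions. The groups, numbers, and threshold below are invented purely for illustration.

```python
# Hypothetical illustration: a "model" that simply learns historical approval
# rates per group reproduces whatever skew exists in that history.
# All data here is made up for illustration only.
import random

random.seed(0)

# Synthetic historical claims: group "A" was approved about 90% of the time,
# group "B" only about 60% of the time, for otherwise identical claims.
history = [("A", random.random() < 0.9) for _ in range(1000)] + \
          [("B", random.random() < 0.6) for _ in range(1000)]

# "Training": record the approval rate observed for each group.
learned_rate = {}
for group in ("A", "B"):
    outcomes = [approved for g, approved in history if g == group]
    learned_rate[group] = sum(outcomes) / len(outcomes)

# "Prediction": approve a new, identical claim only if the learned rate for
# that group clears a fixed threshold -- the historical gap carries over.
def predict(group, threshold=0.75):
    return learned_rate[group] >= threshold

for group in ("A", "B"):
    print(group, round(learned_rate[group], 2), "-> approve new claim:", predict(group))
```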
How Other States Are Tackling AI Regulation
Texas joins a growing list of states working to rein in the use of AI in healthcare. California, Georgia, New York, and Pennsylvania have already passed legislation aimed at addressing similar concerns. At least 40 states have introduced or enacted regulations on AI in 2024, according to Bloomberg Law.
These legislative moves follow growing complaints from patients and lawsuits against major insurers. Companies like Humana, UnitedHealth, and Cigna have faced accusations of improperly using AI algorithms to deny care. For instance, some lawsuits claim these systems were built to reduce costs, even if that meant overriding doctors’ recommendations. The resulting denials forced patients to either pay out of pocket or go without essential treatments.
The Texas bill, if enacted, would also give the state’s Department of Insurance the authority to investigate whether companies are adhering to the new AI rules. This could provide much-needed oversight in an area where technological advancements have so far outpaced meaningful regulation.
The Human Element in Healthcare Decisions
Patient advocacy groups argue that no computer program—no matter how advanced—can account for the complexities of human health. Katherine McLane, spokesperson for the Texas Coalition for Patients, emphasized the need for balance. “Patients deserve a healthcare system that treats them as human beings, not data points,” McLane said. “AI may have its place in streamlining operations, but when it comes to life-altering medical decisions, nothing can replace doctors’ expertise and patients’ unique needs.”
For Senator Schwertner, this isn’t just about policy; it’s about values. While acknowledging the incredible potential of AI to support healthcare systems, he remains firm that there are limits to its use. “We simply cannot and should not solely rely on algorithms to understand the complexities and unique needs of patients,” Schwertner said.
Moving Forward Responsibly
Schwertner’s proposed legislation is still under review and, if passed, would take effect on September 1, 2025. The debate raises an essential question for governments, companies, and the public alike: How can AI be responsibly integrated into healthcare systems without compromising patient care?
One clear path forward is striking a balance. AI can assist with more routine or administrative aspects of healthcare—such as sorting claims or identifying patterns in large datasets—while leaving critical, individualized decisions to people. Combining the efficiency of machines with the ethical oversight of human professionals could allow healthcare to benefit from the best of both worlds.
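One way to picture that division of labor is a triage step in which software fast-tracks routine paperwork but routes anything involving medical necessity, or anything it would deny, to a licensed reviewer. The sketch below is a hypothetical illustration of that pattern, not any insurer’s actual system; the fields, thresholds, and function names are assumptions.

```python
# Hypothetical sketch of a human-in-the-loop triage flow: software handles
# routine sorting, but anything touching medical necessity is routed to a
# licensed reviewer rather than decided automatically.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    amount: float
    requires_medical_necessity_review: bool

def triage(claim: Claim, model_score: float) -> str:
    """Return a routing decision, never a final denial."""
    # Routine, low-risk paperwork can be fast-tracked automatically.
    if (not claim.requires_medical_necessity_review
            and model_score > 0.95 and claim.amount < 500):
        return "auto-approve"
    # Anything involving medical necessity, or that the model would deny,
    # goes to a physician or licensed reviewer for the actual decision.
    return "route to licensed reviewer"

print(triage(Claim("C-001", 120.0, False), model_score=0.98))   # auto-approve
print(triage(Claim("C-002", 4800.0, True), model_score=0.99))   # route to licensed reviewer
```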
Transparency and accountability must also remain top priorities. Health insurance providers using AI should be required to clearly explain how their algorithms work and why specific decisions are made. Independent audits and robust state or federal oversight could help ensure models are functioning as intended, without unintentional bias or harm.
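As a concrete example of what such an audit might check, the hypothetical sketch below compares denial rates across groups in a decision log and flags large gaps for human review. The log format and tolerance are assumptions; a real audit would rely on proper statistical testing and domain review.

```python
# Hypothetical audit sketch: given a log of automated decisions, compare
# denial rates across groups and flag large gaps for human review.
from collections import defaultdict

# Toy decision log; in practice this would come from the insurer's records.
decision_log = [
    {"group": "A", "denied": False}, {"group": "A", "denied": True},
    {"group": "A", "denied": False}, {"group": "B", "denied": True},
    {"group": "B", "denied": True},  {"group": "B", "denied": False},
]

totals, denials = defaultdict(int), defaultdict(int)
for record in decision_log:
    totals[record["group"]] += 1
    denials[record["group"]] += record["denied"]

rates = {g: denials[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print("denial rates:", rates)
if gap > 0.2:  # assumed tolerance for illustration only
    print(f"Flag for review: denial-rate gap of {gap:.0%} across groups")
```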
Finally, any healthcare system involving AI should prioritize education about how the technology works. Patients, doctors, and lawmakers alike need a clear understanding of both its potential benefits and its limitations. These informed discussions will be vital in shaping policies that protect individual health while also harnessing AI’s immense potential.
AI has already begun to revolutionize healthcare, but with life-altering consequences on the line, it’s clear we need to proceed with caution. By investing in thoughtful, patient-centered oversight, we can ensure that this innovative technology serves to enhance, not replace, the trusted human care at the heart of medicine.