As we approach 2025, the prospect of comprehensive federal regulation of artificial intelligence (AI) in healthcare remains unlikely. While discussions and initiatives are underway, a cohesive regulatory framework has yet to materialize.
AI currently occupies a regulatory gray area. While healthcare providers may feel compelled to adopt AI technologies swiftly to maintain a competitive edge, or simply out of fear of missing out, it is important to understand how things can go wrong. Hasty implementation without thorough vetting of AI vendors can lead to significant pitfalls, as recent events have shown.
The Consequences of Unvetted AI Adoption
In 2024, there were multiple instances that illustrate the dangers of premature AI integration in healthcare settings:
- UnitedHealthcare’s AI Model for Coverage Decisions: UnitedHealthcare implemented an AI algorithm to determine patient coverage that reportedly overrode physicians’ judgments and led to the premature discharge of patients from care facilities. This practice not only compromised patient care but also exposed the insurer to legal challenges; UnitedHealthcare is one of several payors currently facing lawsuits over the use of AI to aid in the denial of claims.
- Faulty AI-Powered Transcription Tools: Hospitals have adopted AI transcription services to document patient interactions. However, tools like OpenAI’s Whisper have been found to produce inaccuracies, including fabricated information, potentially jeopardizing patient safety and introducing misinformation into medical records. One limitation of AI is that it can “hallucinate,” a term for instances in which an AI model produces an inaccurate answer but presents it with the confidence of a correct one.
These are just some examples of how the rapid adoption of AI without proper safeguards can create serious risks in healthcare, ultimately jeopardizing patient safety and trust.
State-Level Incentives to Regulate AI in Healthcare
Although there seems to be a significant lag in implementing robust AI oversight at the federal level, some states, like California and Massachusetts, are stepping up to enact legislation that addresses these concerns. For instance, California has implemented AB 3030 and SB 1120, which focus on transparency in AI communications and the necessity of human oversight in medical decisions.
AB 3030 requires healthcare providers to clearly disclose when generative AI has been used to create patient communications. Additionally, it mandates that providers include instructions on how patients can contact a human healthcare provider, ensuring transparency and fostering trust in AI-generated interactions.
Similarly, SB 1120, also known as the “Physicians Make Decisions Act,” addresses the potential overreach of AI in healthcare by ensuring that important medical decisions are not made solely by algorithms. Specifically, the law requires that decisions about whether to approve or deny treatments—such as determining the necessity of certain procedures or medications—involve a licensed healthcare professional who evaluates each case individually. California’s aim is to ensure that AI supports, rather than replaces, the critical judgment of qualified medical providers in patient care.
These efforts not only seek to prevent potential abuses but also recognize and support the transformative role AI will play in shaping the future of healthcare.
Furthermore, states like Massachusetts and New York are exploring or advancing legislation aimed at mitigating AI-related risks. As state-level actions gain traction, they are setting the stage for more comprehensive approaches to AI governance across the healthcare industry.
Considerations for Healthcare Providers
To mitigate the risks associated with AI integration, healthcare providers must take a strategic and cautious approach.
The first step is to conduct comprehensive due diligence on the vendor: thoroughly assess the vendor’s track record, the robustness of its AI solutions, and its compliance with existing healthcare standards. For any AI tool with access to protected health information (PHI), it is paramount to ensure that the tool adheres to stringent data protection protocols, safeguarding sensitive patient information and maintaining the highest standards of data security and privacy in accordance with HIPAA.
Providers should also consider implementing pilot testing phases, introducing AI solutions in controlled environments to evaluate their performance and address potential issues before full-scale deployment. Finally, maintaining human oversight is critical; AI tools should augment, not replace, clinical judgment, preserving the essential human element in patient care and ensuring that technology supports, rather than undermines, the quality of medical decision-making.
How Frier Levitt Can Help
Navigating the complexities of AI integration in healthcare requires informed decision-making and strategic planning. At Frier Levitt, we specialize in guiding healthcare providers through the legal and regulatory considerations of adopting new technologies. Our expertise helps ensure that your practice can leverage AI’s benefits while fully understanding the potential risks.
Contact Frier Levitt today to safeguard your practice’s future.