Pharmacy Technology: The Role and Risks of Artificial Intelligence in the Pharmacy

Article

Artificial intelligence (AI) has been the hottest topic in healthcare technology (which Frier Levitt has recently discussed) and is being used in the hope of streamlining services at a fraction of the cost. Many have wondered whether AI, defined as the capacity of computers or other machines to exhibit or simulate intelligent behavior, could offer pharmacies significant benefits such as improved efficiency, enhanced patient care, and reduced human error. While the potential for a more efficient system is appealing, it is important to recognize the potential risks of using AI in a pharmacy, as discussed below.

First and foremost, the use of AI is still evolving, especially as it relates to pharmacy technology, and many regulatory uncertainties remain. Pharmacists may face legal exposure if AI systems make incorrect recommendations or if there is insufficient human oversight; misuse of AI or errors made by the system could result in legal liability, malpractice suits, or regulatory penalties. If AI systems are poorly integrated into pharmacy operations, or if staff are not adequately trained to use them, misuse or errors may follow. AI systems are not infallible: a model that is not properly trained or tested may produce incorrect drug recommendations or dosage errors, or miss important drug interactions, potentially causing serious patient harm such as adverse drug reactions, overmedication, or ineffective treatment. AI is meant to serve as a support tool rather than a decision maker, and pharmacists are reminded that their clinical judgment and expertise are more valuable than AI when making clinical decisions. It is therefore imperative that a pharmacy establish systems that detect and flag potential AI errors for review by a pharmacist or other healthcare professional before recommendations are acted upon.
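As one illustration, a review gate of this kind can be as simple as holding any AI output that is low-confidence or implicates a known interaction until a pharmacist signs off. The following is a minimal, hypothetical sketch; the class and function names (`Recommendation`, `needs_review`) and thresholds are illustrative assumptions, not a real pharmacy system's API.

```python
# Hypothetical sketch of a human-in-the-loop gate for AI drug recommendations.
# All names and thresholds here are illustrative, not a real product's API.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    drug: str
    dose_mg: float
    confidence: float                      # model's self-reported confidence, 0.0-1.0
    interactions: list = field(default_factory=list)  # flagged drug interactions

def needs_review(rec: Recommendation, threshold: float = 0.95) -> bool:
    """Flag any recommendation that is low-confidence or touches a known
    interaction, so a pharmacist must approve it before it is acted upon."""
    return rec.confidence < threshold or bool(rec.interactions)

rec = Recommendation(drug="warfarin", dose_mg=5.0, confidence=0.90,
                     interactions=["aspirin"])
if needs_review(rec):
    print("HOLD: route to pharmacist for review")  # AI output stays advisory only
```

The design point is that the AI never acts directly: its output is a data record, and a deterministic rule decides whether a human must intervene.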

At its House of Delegates session during the 2024 Pharmacy Futures meeting, the American Society of Health-System Pharmacists unanimously endorsed policies on implementing and evaluating AI tools, while also emphasizing ethical considerations to guide the technology's use. Although most states have yet to address or regulate AI in the pharmacy, some have begun to enact or propose healthcare laws regulating AI. The Illinois Safe Patients Limit Act prohibits hospitals and other healthcare facilities from adopting any policy that substitutes decisions or recommendations made by algorithms, artificial intelligence, or machine learning for the independent judgment of a registered nurse. Virginia has already enacted a law amending the Code of Virginia with respect to hospitals, nursing homes, and certified nursing facilities: Virginia HB2154 requires these facilities to establish and apply policies on the access and use of intelligent personal assistants, electronic devices that use AI software to assist users with basic tasks.

Additionally, there are data privacy and security risks. AI systems often require access to sensitive patient data, including health records, medication history, and personal information, which increases the risk of data breaches and cyberattacks. Unauthorized access to patient data could lead to identity theft, privacy violations, and loss of trust in healthcare providers. To enhance data protection when using AI, pharmacies can use strong encryption to protect patient data in storage and in transmission, ensuring it is not easily accessible to unauthorized users. Where possible, pharmacies can also use de-identified patient data for AI analysis to reduce the impact of a potential breach. The University of Chicago Medical Center (UCM), Stanford, and the University of California San Francisco have all faced data privacy issues after selling access to EHR patient data to Google. UCM allegedly shared EHRs, including physician notes, without appropriately removing identifiable data and without first obtaining patient consent, and Google allegedly exploited this healthcare data to improve its AI diagnostic and search algorithms.

Finally, there are ethical concerns in using AI to treat vulnerable patients. Using AI to optimize costs could lead to decisions that prioritize financial efficiency over patient well-being, such as recommending cheaper but less effective treatments. AI systems must be examined and tested to ensure they do not worsen existing disparities based on socioeconomic class, color, ethnicity, religion, gender, or disability. Otherwise, bias will disproportionately affect these disadvantaged individuals, who then suffer from a less accurate standard of care.

These concerns further demonstrate the importance of using AI properly to mitigate the regulatory and legal risks discussed above. By addressing these areas, pharmacies can reduce the risks of AI and maximize its benefits, such as enhanced efficiency, better patient care, and reduced errors. Critical oversight, regular audits, transparency, and well-defined accountability are key to ensuring the safe and effective use of AI in pharmacy practice.

How Frier Levitt Can Help

Frier Levitt has decades of experience evaluating products, negotiating agreements involving AI programs, and navigating state and board requirements. Regardless of the size of your pharmacy or the amount at stake, Frier Levitt is ready and able to assist you. Our experienced life sciences attorneys can guide your pharmacy in implementing artificial intelligence and ensuring compliance with applicable laws. If you have questions or would like to discuss further, contact us to speak with an attorney.