ARTICLE
4 February 2026

Utah Approves AI For Prescription Refill Process As States Test AI Governance Models

Polsinelli LLP

Polsinelli is an Am Law 100 firm with more than 1,200 attorneys in over 25 offices nationwide. Recognized by legal research firm BTI Consulting as one of the top firms for excellent client service and client relationships, Polsinelli attorneys provide value through practical legal counsel infused with business insight and focus on health care, real estate, finance, technology, private equity and corporate transactions.


Key Takeaways

  • Utah's approval of a new Regulatory Mitigation Agreement (RMA) to pilot AI in the prescription refill process adds to a growing wave of state activity, as the speed and scale of AI adoption have prompted states to consider hundreds of AI laws in a variety of contexts and industries.
  • Even as industry standards rapidly evolve, an expanding patchwork is emerging due to fears that the AI revolution is outpacing guardrails for consumers, developers and deployers.
  • Utah and Texas now offer structured pilot frameworks that may satisfy the AI Litigation Task Force's call for "minimally burdensome" oversight, offering potential models for AI developers seeking to test high-risk use cases within a structured regulatory environment.

This month, a leading global provider of professional services declared that business leaders face an unprecedented challenge: "moving beyond pilots to truly integrating AI into the hearts of their organizations." In Utah and a few other states, organizations are embracing pilot programs to mitigate risk and are collaborating with government agencies and stakeholders in the process.

Utah's latest AI initiative, a regulatory agreement focused on prescription refills, offers a window into how these programs are taking shape and how they might align with the federal push for "minimally burdensome" AI governance.

How Utah's AI Policy Uses RMAs to Manage Risk

In May 2024, Utah became the first state to regulate the use of AI by organizations and their interactions with consumers. Since then, Colorado, Texas and other states have passed similar legislation. As we noted a year ago, the Utah AI Policy Act (UAIPA) seeks to simultaneously increase consumer protections and encourage responsible AI innovation by:

  • Mandating transparency through consumer disclosure requirements, especially now with UAIPA amendments for "high-risk" interactions including health care data;
  • Clarifying liability for AI business operations, including key terms and legal defenses; and
  • Enabling innovation through a regulatory sandbox, RMAs and policy and rulemaking by an Office of Artificial Intelligence Policy (OAIP).

In 2024 and 2025, OAIP entered into RMAs with ElizaChat for an app schools can offer teens for mental health and with Dentacor to assist in diagnosing specific dental conditions. Earlier this month, OAIP entered into an RMA with Doctronic, a health care technology company, to streamline the prescription refill process.

RMAs are one example of how Utah aspires to balance AI's potential risks and rewards. The UAIPA defines an RMA as an agreement between a participant, OAIP and relevant state agencies, and defines regulatory mitigation to include restitution to users, cure periods, civil fines where applicable and other terms tailored to the AI technology seeking mitigation.

While not quite a safe harbor from all liability, RMAs are intended to provide AI developers, deployers and users with an opportunity to test for unintended consequences in a controlled environment. For example, the RMA with ElizaChat is notable for its multiple references to cybersecurity and various schedules.

RMAs May Meet Federal Calls for "Minimally Burdensome" AI Oversight

As previously noted, an Executive Order titled Ensuring a National Policy Framework for Artificial Intelligence was issued last month. It seeks to sustain and enhance AI dominance through a minimally burdensome national framework that preempts state laws deemed inconsistent with administration policies.

State courts will be a battleground for determining the viability of state AI laws, including laws that include safe harbors or mitigation mechanisms or that are narrowly tailored to specific risks such as protecting children or preventing fraud, and for establishing new "industry standards" as legal duties for AI technologies.

As a mitigation mechanism, the Doctronic RMA appears minimal on its face. It states in several places that the relevant agency, Utah's Division of Professional Licensing, will forgo any enforcement actions for any unlawful conduct so long as Doctronic abides by its terms, and that the RMA was entered into to automate repetitive tasks associated with prescription refills.

Even a basic ChatGPT prompt about the prescription refill process returns a long list of common problems, including inappropriate refills, over-refilling and controlled substance misuse. Such problems can also have a disproportionately negative impact on the elderly and other vulnerable patients.

Notable Features of the Doctronic RMA

Touted by OAIP as an emerging model that could reshape access to care and ultimately improve care outcomes, the RMA will last 12 months. It is 24 pages long, with six pages dedicated to agreement terms and 18 pages comprising various schedules.

Schedule A

Schedule A includes references to an effective plan and protocols to monitor and minimize any risks with Doctronic's technology.

Schedule B

Through its detailing of use cases, Schedule B discusses workforce limitations, clinician burnout and reduced access to essential services. Doctronic states that its AI system will process renewal requests for 30-, 60- or 90-day refills of medications that have been previously prescribed by licensed health care providers, based on acceptable standards of care.

Schedule B also includes a detailed project description that includes a Patient Overview and Clinical Workflow with protocols and subparts for such things as:

  • Initial access
  • Identity verification
  • Prescription verification
  • Secondary prescription verification
  • Comprehensive medical assessment
  • AI decision-making and safety protocols
  • Approval pathways

Further, Schedule B outlines Quality Assurance and Oversight as well as Safety and Testing Protocols, in addition to a Comprehensive Case Review Process. This latter process includes multiple phases, benchmarks, ongoing reviews and AI monitoring that provides real-time oversight of all system functions to identify potential risks. Monthly reports to OAIP are required.

Schedule B describes Performance Benchmarking protocols that must be maintained and shared with Utah's medical board. The benchmarks require prescription renewal information, including physician review and successful prescription details. Patient satisfaction scores and time-to-renewal efficiency are required measurements.

Schedule C

Schedule C lists the 192 medications currently covered by the RMA, the diseases or diagnoses they address (e.g., asthma, vitamin deficiencies, anxiety/depression) and their classes (e.g., inflammation, cardiovascular, mental health). Doctronic purports to have handled more than 14 million chats, helped millions of people and served thousands of patients daily.

Key Questions for Evaluating AI Risk Mitigation Frameworks

The AI risk mitigation framework adopted by Doctronic, ElizaChat and Dentacor in Utah is one of many that have been proposed across various jurisdictions and industries, and we have routinely recommended both holistic and specific approaches, with an emphasis on data collection practices.

We reiterate a basic five-factor test:

  1. What is the AI tool?
     • Is it traditional, generative or agentic AI?
     • What models or other tools does it utilize?
     • What are the legal concerns, potential biases or protections baked into the AI?
  2. What is the use case or issue?
     • Does this objective benefit from autonomous decision-making?
     • How critical or sensitive is the method by which the objective is achieved?
     • What is the tolerance for error?
  3. What is the data going into it?
     • Can the AI pull data solely from preapproved sources?
     • What can be accessed?
     • What is screened off or blocked?
     • What risk does external or public information potentially create?
  4. What are the outputs or actions?
     • What actions (or inaction) could occur and with what result?
     • What sectors of the business are impacted?
     • Is autonomous decision-making even permissible?
  5. How accurate is it?
     • How do we measure accuracy (false positives and negatives)?
     • What levels of error (under service level commitments) can we tolerate?
     • Do minor inaccuracies upset the purpose?

What This Means for Future AI Deployments

For developers that think their AI tools are today's version of "a killer app," an RMA framework might be worth it — though specific use cases and business considerations will be determinative. UAIPA offers a structured risk management framework that may appeal to developers seeking state-backed clarity for deploying AI in regulated environments.

For questions regarding AI risk mitigation frameworks and RMAs, please contact Romaine Marshall, Matt Todd, Jennifer Bauer or Bryce Bailey.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
