ARTICLE
4 August 2025

Watch What The Federal Government Does, Not What It Says, About Artificial Intelligence

IR Global

Contributor

IR Global is a multi-disciplinary professional services network that provides legal, accountancy and financial advice to both companies and individuals around the world. Our membership consists of the highest-quality boutique and mid-sized firms that serve the mid-market: firms focused on partner-led, personal service, with extensive cross-border experience.

Since the start of the Trump Administration, its messaging has been all about "removing the barriers to American leadership in artificial intelligence." The Executive Order of that name states that U.S. policy under President Trump is "to sustain and enhance America's global AI dominance." Nowhere does the Executive Order convey a balancing of any other objective against the pursuit of worldwide AI dominance.

What a surprise, then, to read the White House Office of Management and Budget's (OMB) Memorandum M-25-21 to non-security Federal agencies about Federal AI use. Although the memorandum encourages Federal agencies to "accelerate the Federal use of AI," OMB, in doing so, asserts that Federal agencies "must redefine AI governance as an enabler of effective and safe AI innovation." So perhaps, after all, the Trump Administration recognizes the need for guardrails in developing and deploying AI.

The guardrails that OMB outlines for Federal agencies with respect to so-called "high-impact AI" are guardrails from which private U.S. companies may draw guidance. They are particularly instructive for companies that use AI to inform, influence, decide, or execute a decision or action that may affect an important or protected individual or organizational interest.

Determining and Documenting "High-Impact AI" Use Cases

One AI governance area of particular OMB attention is determining and documenting Federal agency use cases for AI that are "high-impact." According to OMB, "high-impact AI" is AI for which the "output serves as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety."

OMB describes the types of individual and organizational rights and safety interests that might be implicated by "high-impact AI." They include:

  • "civil rights, civil liberties, or privacy";
  • "access to education, housing, insurance, credit, employment";
  • "access to critical government resources or services"; and
  • "human health and safety".

OMB goes a step further and identifies examples of Federal agency AI use cases that OMB presumes to be "high-impact." Examples of "high-impact AI" use cases in healthcare—which is the industry on which my law practice focuses—include:

  • "Medically relevant functions of medical devices";
  • "Patient diagnosis, risk assessment, or treatment";
  • "Allocation of care in the context of public insurance"; and
  • "Control of health-insurances costs and underwriting."

Each of those healthcare-related examples pertains to a matter for which the use of AI in generating an output that informs, influences, decides, or executes a decision or action would be viewed by most people (not just Americans) as "high-impact." That is because each listed matter relates to "human health," either in terms of accessing safe and appropriate health care or accessing affordable health insurance to pay for health care.

Applying Minimum Risk Management Practices to "High-Impact AI" Use Cases

OMB mandates that Federal agencies apply a minimum set of practices to manage risks arising from the "high-impact AI" use cases the agencies determine and document. Those risks, says OMB, include "risks related to the efficacy, safety, fairness, transparency, accountability, appropriateness or lawfulness of a decision or action" resulting from AI's use to "inform, influence, decide, or execute" the decision or action. The seven minimum practices, says OMB, for managing those risks for a "high-impact AI" use case are:

  • Conduct pre-deployment testing of the AI.
  • Complete a pre-deployment impact assessment of the AI.
  • Conduct ongoing monitoring and periodic human review of the AI to detect and mitigate adverse impacts in the AI's performance or security post-deployment.
  • Ensure sufficient training and assessment of the human operators of the AI and the human users of the AI output.
  • Ensure human oversight, intervention and accountability for the "high-impact AI" use case commensurate with the risks the use case presents.
  • Enable individuals negatively impacted by a decision or action resulting from the "high-impact AI" use case to appeal the decision or action with a human review.
  • Seek and account for input from the individuals and organizations affected by the "high-impact AI" use case.

These risk management practices have appeal because they describe concrete actions to be taken in managing the efficacy, safety, fairness, transparency, accountability, appropriateness, and lawfulness risks of "high-impact AI" use cases. Those actions may be taken not only by Federal agencies but also by private companies that have determined and documented their own "high-impact AI" use cases. Although these practices are not law to which private American companies are subject, they provide a counterbalance to the unrestricted acceleration of AI use and innovation.

Key Takeaways:

  • The US Federal Approach Shows Signs of Balancing AI Innovation With Oversight
    Despite early rhetoric about "global AI dominance", the Trump-era OMB Memorandum M-25-21 signals a pragmatic shift—requiring US federal agencies to incorporate governance principles that promote safe and effective AI deployment, especially for "high-impact AI" applications.
  • 'High-Impact AI' Defined by Risk to Rights, Safety and Public Welfare
    Use cases are deemed "high-impact" when AI outputs materially affect individual rights, safety, or access to critical services such as health care, education, or housing. Businesses—especially those in the health or insurance sectors—can look to these definitions when assessing risk and liability.
  • Voluntary Guidelines Offer Practical Guardrails for the Private Sector
    While not binding for private companies, the seven OMB-prescribed risk management practices (including pre-deployment testing, human oversight, and user appeal rights) offer a valuable framework for managing legal, ethical and reputational risk in AI deployment.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
