On 11 December 2025, President Trump signed an executive order entitled "Ensuring a National Policy Framework for Artificial Intelligence", which signals a significant escalation in efforts to establish federal primacy over state-level AI regulation in the United States.
The order establishes several mechanisms through which the Administration intends to challenge, constrain, and ultimately pre-empt state AI laws that it considers inconsistent with the goal of maintaining US dominance in artificial intelligence development.
Background and Stated Rationale
The executive order builds upon Executive Order 14179 of 23 January 2025 ("Removing Barriers to American Leadership in Artificial Intelligence"), which revoked the Biden Administration's October 2023 AI Executive Order. The present order articulates the Administration's view that state-level AI regulation poses a threefold threat to US AI competitiveness.
First, the order contends that a multiplicity of state regulatory regimes creates compliance challenges, particularly for start-up enterprises that lack the resources to navigate fifty distinct legal frameworks. Second, the Administration asserts that certain state laws compel AI developers to embed what the order characterises as "ideological bias" within their models. The order explicitly identifies Colorado's algorithmic discrimination legislation as an example, suggesting that such laws may require AI systems to produce results that the Administration considers inaccurate in order to avoid differential treatment or impact on protected groups. Third, the order raises concerns about the extraterritorial effect of state laws that may impermissibly regulate conduct occurring beyond their borders, thereby impinging upon interstate commerce.
Establishment of an AI Litigation Task Force
Section 3 of the order directs the Attorney General to establish an AI Litigation Task Force within 30 days. The Task Force's mandate is to challenge state AI laws that the Administration considers inconsistent with the policy of maintaining US AI dominance through minimal regulatory burden. The order identifies several potential legal bases for such challenges, including claims that state laws unconstitutionally regulate interstate commerce, are pre-empted by existing federal regulations, or are otherwise unlawful in the Attorney General's judgement.
The Task Force is required to consult periodically with the Special Advisor for AI and Crypto, the Assistant to the President for Science and Technology, the Assistant to the President for Economic Policy, and the White House Counsel regarding the identification of specific state AI laws warranting legal challenge.
Commerce Department Evaluation of State AI Laws
Section 4 requires the Secretary of Commerce to publish, within 90 days, an evaluation of existing state AI laws. This evaluation must identify laws that the Department considers onerous and in conflict with the Administration's policy objectives, as well as laws that should be referred to the AI Litigation Task Force for potential legal challenge.
At a minimum, the evaluation must identify state laws that require AI models to alter what the order terms their "truthful outputs", or that may compel AI developers or deployers to disclose or report information in a manner that the Administration considers to violate the First Amendment or other constitutional provisions. The order also permits the evaluation to identify state laws that the Administration views as promoting AI innovation consistent with its policy objectives.
Conditioning of Federal Funding
Section 5 introduces a mechanism for leveraging federal funding to discourage state AI regulation. Within 90 days, the Assistant Secretary of Commerce for Communications and Information must issue a Policy Notice specifying conditions under which states may be eligible for remaining funding under the Broadband Equity, Access, and Deployment (BEAD) Program. The Policy Notice must provide that states identified as having onerous AI laws pursuant to the Section 4 evaluation are ineligible for non-deployment funds, to the maximum extent permitted by federal law.
Beyond broadband funding, the order directs all executive departments and agencies to assess their discretionary grant programmes and determine whether such grants may be conditioned on states either refraining from enacting AI laws that conflict with the Administration's policy, or entering into binding agreements not to enforce such laws during the period in which they receive federal funding.
Federal Communications Commission Proceeding
Section 6 directs the Chairman of the Federal Communications Commission to initiate a proceeding, within 90 days of the Commerce Department's evaluation, to determine whether to adopt a federal reporting and disclosure standard for AI models that would pre-empt conflicting state requirements. This provision contemplates the possibility of establishing a uniform federal disclosure regime that would supersede the varied transparency and reporting obligations that several states have enacted or are considering.
Federal Trade Commission Policy Statement
Section 7 requires the Chairman of the Federal Trade Commission to issue a policy statement, within 90 days, on the application of the FTC Act's prohibition on unfair or deceptive acts or practices to AI models. Notably, the policy statement must explain the circumstances under which the Administration considers state laws requiring alterations to the "truthful outputs" of AI models to be pre-empted by the FTC Act's prohibition on deceptive practices. This represents an attempt to characterise compliance with certain state algorithmic fairness requirements as potentially constituting federally prohibited deceptive conduct.
Legislative Recommendation
Section 8 directs the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology to prepare a legislative recommendation establishing a uniform federal policy framework for AI that would pre-empt state AI laws conflicting with the Administration's policy objectives.
The order specifies certain categories of state law that the legislative recommendation should not propose pre-empting. These carve-outs include state laws relating to child safety protections, AI compute and data centre infrastructure (other than generally applicable permitting reforms), and state government procurement and use of AI. The order reserves the possibility of additional carve-outs to be determined subsequently.
Implications for Stakeholders
The executive order represents the most comprehensive federal attempt to date to constrain state-level AI regulation in the United States. For organisations operating in the US market, the order creates a period of significant regulatory uncertainty as the various mechanisms it establishes begin to operate.
Several state laws may face immediate scrutiny, including Colorado's algorithmic discrimination legislation, which imposes obligations on developers and deployers of high-risk AI systems to avoid algorithmic discrimination. California's various AI-related legislative efforts, including proposals that were vetoed by Governor Newsom in 2024 and subsequent measures, may also attract attention from the AI Litigation Task Force.
The order's characterisation of algorithmic fairness requirements as compelling "ideological bias" or requiring AI systems to produce results that are not "truthful" raises substantial questions about the Administration's understanding of how such systems function and the policy objectives underlying non-discrimination requirements. The framing also creates potential tension with existing federal civil rights frameworks that prohibit discriminatory outcomes in various contexts.
The funding conditionality provisions may prove particularly consequential for states that have enacted or are considering AI legislation. The prospect of losing eligibility for federal broadband funding or discretionary grants creates a significant financial incentive for states to refrain from AI regulation, regardless of whether such laws would ultimately survive legal challenge.
For multinational organisations, the order underscores the divergence between the US regulatory trajectory and that of other major jurisdictions, most notably the European Union. While the EU AI Act establishes a comprehensive risk-based regulatory framework with substantial compliance obligations, the present order signals the US Administration's preference for minimal federal regulation combined with active efforts to prevent state-level requirements from filling the regulatory gap. Organisations operating across both jurisdictions will need to maintain compliance frameworks capable of accommodating these fundamentally different regulatory philosophies.
Conclusion
The executive order's various provisions will unfold over the coming months: the AI Litigation Task Force must be established within 30 days, the Commerce Department evaluation and FTC policy statement are due within 90 days, and the FCC proceeding must follow within 90 days of the Commerce evaluation. The legislative recommendation contemplated by Section 8 would require Congressional action, the prospects for which remain uncertain given the current composition of Congress.
What is clear is that the Administration has signalled an intent to use the full range of executive tools at its disposal to constrain state-level AI regulation, while simultaneously pursuing a legislative strategy to establish federal pre-emption on a more permanent basis. Organisations with US operations or market exposure should monitor these developments closely and consider the implications for their AI governance frameworks and US market strategy.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.