In concert with the Trump Administration's prioritization of AI development and investment in the United States, the Federal Trade Commission ("FTC" or the "Commission") has set a new trajectory for its AI enforcement efforts, including, in some cases, setting aside previous consent orders. Most pointedly, on December 22, 2025, the FTC set aside a final order against Rytr, an artificial intelligence ("AI") company, in part because the order "unduly burdens" AI innovation.
FTC actions signal that, though the enforcement path may be narrower, the Commission does not intend to ignore AI companies and products in their entirety. Rather, a dual approach to FTC AI enforcement is developing.
The Commission previously alleged that Rytr's customer-review writing service could assist users in committing fraud. Specifically, the Commission alleged that Rytr's technology allowed users to produce genuine-appearing customer reviews, by the thousands, that could be posted and used to deceive consumers. The Commission stated that the Rytr set-aside order followed the Trump Administration's AI Action Plan but noted the FTC still would continue to pursue enforcement against companies that "deceive consumers about the capabilities of their generative AI."
Following the Rytr order, the FTC appears to have staked out a dual approach to regulation of AI: Reduced and in some cases revoked enforcement related to the actual capabilities of AI products — even in cases where the AI could potentially be used to trick consumers — but continued enforcement related to false statements companies make about the capabilities of their AI products.
The latter is consistent with FTC regulation of false advertising under Section 5 of the Federal Trade Commission Act ("FTC Act"). The FTC's recent enforcement actions appear consistent with this approach and suggest how the FTC will address AI-related consumer protection concerns throughout the remainder of the second Trump Administration.
The Trump Administration's AI Action Plan shifted FTC enforcement priorities
In the first days of his second term, President Trump announced that his administration would promote AI innovation in many ways, including by rolling back AI regulation.
President Trump issued an executive order in January 2025 establishing that, "[i]t is the policy of the United States to sustain and enhance America's global AI dominance"; instructing federal agencies to "identify" and then "suspend, revise, or rescind" any actions thwarting AI innovation; and calling upon his advisors to draft a detailed AI action plan to implement the new AI national policy.
The resulting July 2025 AI Action Plan began with a command to federal agencies to reduce enforcement against AI companies, specifically calling on the FTC to (1) review ongoing investigations "to ensure that they do not advance theories of liability that unduly burden AI innovation," and (2) review "all FTC final orders, consent decrees, and injunctions, and, where appropriate, seek to modify or set-aside any that unduly burden AI innovation."
This approach differs significantly from that of the Biden Administration FTC, which took a more cautious approach to AI and its potential for consumer fraud, including through its informal guidance and Operation AI Comply enforcement sweep.
Under the initiative, the FTC pursued numerous enforcement actions under Section 5 for alleged "AI washing," that is, deceptive claims about the capabilities of AI. Operation AI Comply also included a Section 5 action against Rytr for the unfair practice of providing consumers the means to "pollute" the market with fake reviews. In heeding the AI Action Plan, the FTC appears to have now narrowed its course.
The FTC has curtailed enforcement against AI companies
The current FTC appears to have largely ceased enforcement actions against AI companies for the capabilities of their products. No new enforcement actions of this kind are apparent and, consistent with the Administration's AI Action Plan, the FTC has taken the uncommon step of sua sponte reopening and setting aside its final consent order against Rytr, something it appears to have done only one or two other times in the past two decades.
Rytr is an AI-powered writing assistant. The Commission filed a September 2024 complaint against Rytr, alleging that the company's "service generates detailed reviews that contain specific, often material details that have no relation to the user's input." The FTC alleged that when users posted the AI-drafted fake reviews, sometimes by the thousands, both consumers who relied on the reviews and honest competitors who lost business because of them were harmed.
The FTC claimed Rytr's service provided the means and instrumentalities to deceive consumers and was an unfair practice because it "offered a service intended to quickly generate unlimited content for consumer reviews and created false and deceptive written content for consumer reviews."
Rytr settled with the FTC in December 2024 and was banned from offering any service generating customer reviews or testimonials. Current FTC Chairman Andrew N. Ferguson dissented from the Rytr final consent order.
The Commission's subsequent December 2025 order setting aside the Rytr order tracks Chairman Ferguson's dissent. As in the dissent, the FTC explains that the original complaint failed to state a claim under Section 5 of the FTC Act and that the order thus was not in the public interest.
In quoting from the Chairman's dissent, the Commission highlights its current approach: "Treating as categorically illegal a generative AI tool merely because of the possibility that someone might use it for fraud . . . threatens to turn honest innovators into lawbreakers and risks strangling a potentially revolutionary technology in its cradle."
The Rytr set-aside order notes that "consumers benefit from the invention and availability of new tools, even though almost all tools have both legal and illegal uses."
Inflated AI claims still risk FTC action
The FTC, however, has not completely abandoned regulation of AI products. Instead, it appears to have gone back to Section 5 basics, shifting its enforcement focus to false claims about a company's AI offerings.
In May 2025 congressional testimony before the Subcommittee on Financial Services and General Government of the Committee on Appropriations, Chairman Ferguson described a "[c]ircumspect and appropriate enforcement" approach and touted a handful of recent enforcement actions against companies' false advertising about their AI product capabilities. This kind of false advertising that deceptively inflates the capability of a product is classic Section 5 fraud that the Commission has historically pursued.
This year, the FTC has initiated and settled several of these Section 5 AI enforcement actions. In April 2025, accessiBe agreed to a $1 million settlement over allegations that it misrepresented the ability of its AI-powered tools to make a website compliant with accessibility guidelines.
In August 2025, the Commission approved monetary judgments exceeding $20 million against Click Profit and its co-defendants for falsely claiming to use advanced artificial intelligence, among other deceptive practices.
That same month, the Commission approved a final consent order with Workado to settle claims that Workado falsely advertised the accuracy of its AI-detection products. The company was ordered to provide evidence of the efficacy it advertised but avoided a monetary judgment. Other actions related to falsely advertising the capabilities of AI products remain pending.
Conclusion
Together, these FTC actions confirm that, though the enforcement path may be narrower, the Commission does not intend to ignore AI companies and products entirely; a dual approach to FTC AI enforcement is developing. The Commission appears likely to continue regulating false advertising as it always has, including claims about AI-related products, while companies like Rytr that develop AI products may be less of a focus. As the Commission stated in the December 2025 order setting aside the Rytr order, "Where actors use AI to violate the law or deceive consumers about the capabilities of their generative AI, they should be held accountable, as the FTC has done and will continue to do."
Continued monitoring of FTC enforcement actions will further confirm the contours of this dual approach, including the kinds of previous AI-related consent orders ripe for reconsideration. Enforcement actions against AI companies for misleading statements could also reveal patterns as to particularly risky statements and typical penalties levied.
Further, while the Administration has announced that it will not hamper AI innovation with needless enforcement, it has carved out some areas where it may give AI companies less room to maneuver. For example, the FTC recently announced an investigation of AI chatbots due in part to concerns about protecting children, and in his May 2025 congressional testimony Chairman Ferguson stated that the FTC would pursue enforcement against the use of deep fakes pursuant to the Take It Down Act.
Therefore, while the dual approach described above appears to provide a general framework for the FTC's approach to AI enforcement, companies offering AI products should continue to monitor this constantly evolving area and carefully review the statements they make about their AI capabilities.
Originally published by Reuters Legal News.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.