2026 United States AI Compliance: State Laws Take Effect Amid Federal Uncertainty
Joe Cahill
Multiple significant state AI laws became effective January 1,
2026. California's Transparency in Frontier AI Act (SB 53),
signed September 29, 2025, requires developers of large AI models
trained using more than 10²⁶ integer or floating-point
operations to publish risk
frameworks, report critical safety incidents within 15 days, and
implement whistleblower protections, with penalties up to $1
million per violation. Texas's Responsible AI Governance Act
(HB 149), signed June 22, 2025, prohibits AI systems designed for
"restricted purposes," including encouragement of
self-harm, unlawful discrimination, and CSAM generation, with
penalties ranging from $10,000–$12,000 for curable violations
to $80,000–$200,000 for uncurable violations; no private
right of action exists. Illinois's amendment to the Human
Rights Act (HB 3773), signed August 9, 2024, makes it a civil
rights violation to use AI for employment decisions without notice
to employees or in a manner that discriminates against protected
classes, with a private right of action available. Looking ahead,
the Colorado AI Act (SB 24-205)—the first comprehensive U.S.
statute targeting "high-risk" AI systems—takes
effect June 30, 2026 (delayed from February 1, 2026, by SB
25B-004), requiring impact assessments, consumer disclosures, and
reasonable care to prevent algorithmic discrimination.
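As a practical matter, scoping under SB 53 turns on estimated total
training compute. The Python sketch below applies the widely used
rough approximation of 6 × parameters × training tokens for dense
transformer training; that heuristic and the model figures in it are
illustrative assumptions, not anything specified by the statute.

```python
# Back-of-the-envelope check against SB 53's 10^26-operation threshold.
# The ~6 * parameters * training-tokens heuristic for dense transformer
# training compute is an assumption, not part of the statute, and the
# model figures below are purely illustrative.

SB53_THRESHOLD_OPS = 1e26

def estimated_training_ops(n_params: float, n_tokens: float) -> float:
    """Rough total training operations for a dense transformer."""
    return 6 * n_params * n_tokens

# Hypothetical model: 400B parameters trained on 15T tokens.
ops = estimated_training_ops(400e9, 15e12)
print(f"Estimated training ops: {ops:.2e}")                   # ~3.60e+25
print("Exceeds SB 53 threshold:", ops > SB53_THRESHOLD_OPS)   # False
```

An estimate near the line is no safe harbor; developers close to the
threshold should assess coverage with counsel rather than rely on a
heuristic.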
At the federal level, the regulatory environment remains in
flux. The only standalone federal statute enacted to date that
substantively regulates AI systems is the TAKE IT DOWN Act (signed May 19,
2025), which requires platforms to establish notice-and-removal
processes for non-consensual intimate imagery, including
AI-generated deepfakes, by May 19, 2026. President Trump revoked
Biden's Executive Order 14110 on January 20, 2025, eliminating
prior AI safety requirements. On December 11, 2025, President Trump
signed a new executive order titled "Ensuring a National
Policy Framework for Artificial Intelligence," which proposes
to preempt state AI laws deemed inconsistent with federal policy
and specifically names the Colorado AI Act. For more information
and the implications, see our prior article. Until the relevant legal
challenges are resolved, state laws remain enforceable.
Organizations should continue to comply with applicable state AI
requirements while monitoring the Commerce Department's
evaluation of state AI laws, due March 11, 2026.
EU AI Act in Flux: Key Developments to Watch in 2026
Joe Cahill
On November 19, 2025, the European Commission published the Digital
Omnibus, a package of legislative proposals to simplify the GDPR,
AI Act, and Data Act. The most significant change for businesses is
a restructured compliance timeline for high-risk AI systems. The
proposal replaces the fixed August 2, 2026, deadline with a
conditional timeline tied to the availability of harmonized
technical standards: Obligations would apply six months after the
Commission confirms compliance support is available for Annex III
systems (with a backstop of December 2, 2027) and twelve months for
Annex I product-regulated systems (backstop of August 2, 2028). The
proposal also amends the GDPR to facilitate AI development. A new
Article 88c would explicitly confirm that legitimate interest is a
valid legal basis for processing personal data to develop and
operate AI systems, provided controllers implement appropriate
safeguards including data minimization and an unconditional right
for data subjects to object. Additionally, new provisions would
permit incidental processing of special category data that
residually appears in training datasets despite filtering efforts,
subject to technical safeguards, and would allow processing of
sensitive data for bias detection and correction.
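To make the conditional timeline concrete, the sketch below computes
when high-risk obligations would begin to apply on one reading of the
proposal: the relevant grace period runs from the Commission's
confirmation, but the backstop date caps any delay. The confirmation
date used here is hypothetical.

```python
# Resolves the Digital Omnibus's proposed conditional deadlines for
# high-risk AI systems: obligations apply a grace period after the
# Commission confirms compliance support is available, but no later
# than the backstop date. The confirmation date is hypothetical.
from datetime import date

def add_months(d: date, months: int) -> date:
    # Assumes the day-of-month exists in the target month (true here).
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1)

def applicability(confirmation: date, grace_months: int, backstop: date) -> date:
    return min(add_months(confirmation, grace_months), backstop)

confirmed = date(2026, 9, 1)  # hypothetical confirmation date
print("Annex III:", applicability(confirmed, 6, date(2027, 12, 2)))   # 2027-03-01
print("Annex I:  ", applicability(confirmed, 12, date(2028, 8, 2)))   # 2027-09-01
```

If confirmation came late enough, min() simply returns the backstop,
which is how the December 2, 2027, and August 2, 2028, outer dates
operate.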
The proposals are now before the European Parliament and
Council. If negotiations proceed smoothly, final adoption could
occur by late 2026, with implementation potentially by mid-2027.
However, the proposals have received a mixed political
reception—civil society groups and center-left members of the
European Parliament have criticized the GDPR changes as weakening
data protection, which may complicate negotiations. Critically, the
AI Act amendments must be adopted before August 2, 2026, for the
extended high-risk deadlines to take effect; otherwise, the
original compliance dates will apply. Companies should continue
preparing for compliance under existing deadlines while monitoring
the legislative process closely.
Inside the DOJ's New AI Litigation Task Force
Cailyn Knapp, Bradley Bennett
On January 9, 2026, the United States Department of Justice
("DOJ") announced the creation of an Artificial
Intelligence Litigation Task Force ("Task Force") through
an internal memorandum. The Task Force's primary
mandate is to challenge state laws regulating artificial
intelligence. Its creation was directed by the President in a
December 11, 2025, Executive Order titled "Ensuring a
National Policy Framework for Artificial Intelligence," which
seeks to reduce regulatory compliance costs, particularly for
start-ups and emerging technology companies. The Executive Order
rests on the premise that compliance with a "patchwork"
of state-by-state regulation impedes innovation more than adherence
to a minimally burdensome national standard.
Read more here.
Amazon v. Perplexity: Can Website Terms Prevent AI Agent Use?
Coleman Strine
The next generation of AI lawsuits, based on agentic tools, has
begun. In Amazon.com Services LLC v. Perplexity AI, Inc.,
No. 3:25-cv-09514 (N.D. Cal. filed Nov. 4, 2025), Amazon has
accused Perplexity of violating federal and California computer
fraud and abuse statutes based on Perplexity's use of AI agents
on amazon.com.
Amazon's allegations are largely based on Perplexity's AI-enabled web browser, Comet, which includes an agentic service that interacts with webpages on behalf of its users. According to Amazon's Complaint, Comet has been used by many individuals to shop on amazon.com. However, Amazon's terms prohibit such use and require automated AI agents to identify themselves, including by disclosing the agent's "identity" in the User-Agent header of all HTTP/HTTPS requests, using the format "Agent/[agent name]." Amazon has characterized Perplexity's actions as "covert" attempts to access its services, partly because Comet does not follow Amazon's identification conventions. As a result, Amazon has brought claims based on (1) the Computer Fraud and Abuse Act (18 U.S.C. § 1030, et seq.), alleging that Perplexity accessed Amazon's computers without authorization and with intent to defraud, including "by hiding its agentic activity and violating Amazon's Conditions of Use," and (2) the California Comprehensive Computer Data Access and Fraud Act (California Penal Code § 502), alleging that Perplexity knowingly accessed Amazon's data and used its services without permission.
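Purely by way of illustration, an agent following the identification
convention that Amazon describes would send requests along the lines
of the Python sketch below; the agent name, target URL, and request
library are hypothetical choices, not details drawn from the filings.

```python
# Illustrative only: an HTTP request that self-identifies via the
# "Agent/[agent name]" User-Agent convention Amazon's terms are said
# to require of automated AI agents. The agent name and URL are
# hypothetical, not taken from the complaint.
import urllib.request

req = urllib.request.Request(
    "https://www.amazon.com/",
    headers={"User-Agent": "Agent/ExampleShoppingAgent"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)
```

On Amazon's theory, an agent that instead presents a generic browser
User-Agent string is concealing its identity, which is part of what
makes the alleged access "covert."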
Along with its Complaint, Amazon filed a motion for preliminary injunction. In response, Perplexity characterized the suit as an exclusionary tactic intended to clear the way for Amazon's own agentic AI products. The motion is set to be heard on February 13.
As more AI agents come online, many companies have become
concerned with whether, and how, they may prevent these
agents from using their web services. The outcome of this case will
provide useful insight into whether website terms of use, and
computer fraud and abuse statutes, are appropriate tools to prevent
such use.
The AI Enforcement Seesaw: Federal Retreat Meets State Advance
Parker Hancock
On December 19, 2025, New York Governor Kathy Hochul signed the RAISE
Act, making New York the first state to enact major AI safety
legislation after President Trump's December 11 executive order
calling for federal preemption of state AI laws. Three days later,
the FTC voted 2-0 to vacate its 2024 consent order against Rytr
LLC, an AI writing tool, explicitly citing the Trump
Administration's AI Action Plan.
Same week. Opposite directions. For in-house counsel at companies deploying AI, this juxtaposition captures the new regulatory reality: don't mistake federal pullback for regulatory relief. States are stepping in—and the compliance obligations are multiplying, not simplifying.
Read more here.
New York Courts' AI Committee Annual Report Highlights Accomplishments and Establishes Recommendations for AI-Assisted Filings
Ben Bafumi
The New York Advisory Committee on Artificial Intelligence and the
Courts, established in 2024, recently issued its 2025 Annual
Report, addressing both the progress made in the past year and
proposals for the future. The Committee drafted an Interim Policy on
the Use of AI Within the Unified Court System (UCS), since adopted,
which mandates AI training for all judges and staff,
restricts unsuitable uses of AI (e.g., general-purpose AI for legal
writing and research), and limits generative AI use to approved
tools (e.g., ChatGPT and Microsoft Copilot). Notably, the Committee
also proposed a statewide policy disfavoring any prohibition on the
use of generative AI to assist with preparing court documents,
accompanied by a model rule permitting such use so long as it accords
with 22 NYCRR Part 130 and the Rules of Professional Conduct (which
require truthfulness and accuracy in all court submissions). The rule
would eliminate any requirement to disclose whether AI was used in
preparing court filings and would allow judges to
implement their own "part rules" governing AI use in
their courtrooms. The Administrative Board approved the policy and
rule for public comment, which closed at the end of December
2025.
Additionally, the Report recommends expansion focused on responsible deployment, equity, and education. Strategic initiatives include advancing narrowly scoped pilot programs—such as a Nassau County Family Court intake assistant for pro se litigants—and exploring a discrete-use chatbot on the New York Courts website. The Report also considers leveraging AI to streamline records management and searchability, clerk workflows, and secure translations. The Report refines ethics guidance, for example, by suggesting mandatory CLE trainings on AI use, bias, and accountability.
The Report makes clear that AI use is welcome—and now
almost expected—in New York courts, but it must be used
competently and verifiably through diligent attorney oversight.
Although there is no court- or system-wide prohibition on
AI-assisted filings, submissions must still comply with existing
state law and ethical duties. Verification protocols,
confidentiality safeguards, and active human review for any
AI-assisted work product are paramount. While courts are likely to
favor vetted, secure research environments, even these specialized
platforms require ongoing validation and review. Judicial
management of AI's use in courts, including in official court
filings, will only increase.
China's Draft Regulations on Human-Like AI: Key Takeaways
Enrico Picozza
On December 27, 2025, the Cyberspace Administration of China (CAC)
released draft regulations designed to govern the
burgeoning field of human-like interactive AI services. The
proposed rules aim to strike a balance between fostering innovation
and ensuring that these services operate safely and align with
national values within mainland China. Key requirements for service
providers include, among other things, upholding core
socialist values, implementing robust user protections, meeting
government reporting standards, and ensuring high-quality training
data.
Read more here.
Quick Links
For additional insights on AI, check out Baker Botts' thought
leadership in this area:
- International Regulators Draw the Line on AI-Generated Explicit Imagery: Senior Associate Parker Hancock writes on the red line that has emerged and what this means for deployers.
- Beyond the Factory Floor: Managing Export Compliance in an Era of AI, Visibility and Agile Talent: Special Counsel Jason Wilcox explains how emerging technologies in manufacturing are creating new export compliance challenges under strict U.S. regulatory regimes like EAR and ITAR.
- California Eliminates the "Autonomous AI" Defense: What AB 316 Means for AI Deployers: Senior Associate Parker Hancock analyzes the consequences of this legislation.
- AI Counsel Code: Stay up to date on all the artificial intelligence (AI) legal issues arising in the modern business frontier with host Maggie Welsh, Partner at Baker Botts and Co-Chair of the AI Practice Group. Our library of podcasts can be found here, and stay tuned for new episodes coming soon.
To access all previous issues of
this newsletter, please visit our AI Legal Watch Library.
For additional information on our Artificial Intelligence practice,
experience and team, please visit our page here.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.