A New Playbook for § 101? The USPTO's Guidance on Using Technical Evidence
Michael Burger
Section 101 eligibility remains one of the most unpredictable and frequently contested areas of U.S. patent practice, particularly for software, artificial intelligence, and machine learning inventions. USPTO Director John Squires has identified bringing clarity to § 101 as a primary objective of his tenure, emphasizing the need to reduce the inconsistent and confusing eligibility determinations that can hinder innovation. In furtherance of that goal, Director Squires issued two memoranda in December 2025 addressing how applicants can use technical evidence, and how examiners must evaluate it, in the context of § 101 rejections.
For applicants, the guidance provides a practical roadmap for
responding to § 101 rejections by linking claim language to
technological improvements disclosed in the specification.
Out of the Shadow Library: Fair Use and AI Training Data
Julie Albert, Jasmine Boyer (Law Clerk)
Since the launch of the first large language models (LLMs), authors, musicians, and news organizations have brought a wave of copyright lawsuits alleging that their works were misappropriated to build today's most powerful generative AI tools. In response, AI companies have asserted that such use is non-infringing fair use. These lawsuits span a spectrum of alleged copyright infringement: some focus solely on the unauthorized use of works as training inputs, others on a model's ability to generate allegedly infringing outputs, and many premise liability on both behaviors.
Our Take on AI: February 2026
California Eliminates the "Autonomous AI" Defense: California's Assembly Bill 316 took effect January 1, 2026, prohibiting defendants who "developed, modified, or used" an AI system from arguing that "the artificial intelligence autonomously caused the harm" in civil actions. The law responds to arguments like Air Canada's unsuccessful attempt to characterize its customer service chatbot as a "separate legal entity" responsible for providing inaccurate information to a passenger. Importantly, AB 316 does not create strict liability—defendants may still contest causation, foreseeability, and comparative fault. However, the law has significant implications for the AI supply chain, as it applies to foundation model developers, fine-tuners, integrators, and deployers alike. Organizations using third-party AI tools should review their vendor agreements, particularly indemnification provisions and limitation of liability clauses. Read more about this development here: "California Eliminates the 'Autonomous AI' Defense: What AB 316 Means for AI Deployers."
China Proposes Regulations on Human-Like AI Services: On December 27, 2025, the Cyberspace Administration of China released draft regulations governing human-like interactive AI services. The proposed rules require providers to ensure AI-generated content conforms to China's core socialist values, implement user protections including intervention measures for users showing signs of emotional distress or addiction, and clearly label AI-generated content. The draft includes specific protections for minors (requiring guardian consent for emotional companionship services) and elderly users (requiring notification to emergency contacts if threats to life, health, or property are detected). Providers must also conduct security assessments when launching new services, implementing major technological changes, or reaching user milestones such as 1 million registered users. Read more about this development here: "China's Draft Regulations on Human-like AI: Key Takeaways."
The AI Enforcement Seesaw: Federal Retreat Meets State Advance: Recent developments highlight a bifurcating regulatory landscape for AI. On December 19, 2025, New York Governor Kathy Hochul signed the RAISE Act, requiring frontier AI developers to publish safety protocols and report safety incidents within 72 hours, with penalties up to $1 million for first offenses. Days later, the FTC voted 2-0 to vacate its consent order against AI writing tool Rytr LLC, citing the Trump Administration's AI Action Plan. The FTC's decision signals a shift from "potential harm" to "actual harm" as the threshold for AI enforcement—the Commission found allegations that a tool "might" be used deceptively insufficient without evidence of actual misuse. Meanwhile, states continue advancing: Texas's Responsible AI Governance Act took effect July 2025, Colorado's AI Act becomes effective February 2026, and California's Transparency in Frontier AI Act takes effect January 2026. Nearly two dozen state attorneys general sent a letter urging the FCC not to preempt state AI laws. For AI deployers, federal pullback should not be mistaken for regulatory relief—compliance obligations are multiplying at the state level. Read more about this development here: "The AI Enforcement Seesaw: Federal Retreat Meets State Advance."