When Judge Jed Rakoff ruled in United States v. Heppner (S.D.N.Y. Feb. 17, 2026) that documents a criminal defendant created through exchanges with Anthropic's Claude platform weren't protected by attorney-client privilege or the work product doctrine, the decision generated significant attention across the legal community. Many practitioners read that ruling as a sweeping statement: using AI tools waives privilege. While great for headlines, that is an overstatement of what Heppner actually holds, and the Warner case, which was decided a week earlier in the Eastern District of Michigan, shows why the distinction matters.
The Heppner Decision: Narrower Than It Appears
In Heppner, the trial judge ruled that documents a criminal defendant created through his own exchanges with Anthropic's Claude platform and sent to his attorney afterwards were protected by neither attorney-client privilege nor the work product doctrine. The ruling rested on several specific facts. Heppner used a public consumer AI tool that explicitly disclaims providing legal advice and whose privacy policy authorizes data collection, model training, and disclosure to third parties including government authorities. He did so on his own initiative, without direction from his counsel. And the government had already seized the documents pursuant to a search warrant before the privilege question even arose.
On privilege, the court identified three independent deficiencies: Claude is not a lawyer, so there was no attorney-client communication; the platform's terms defeated any reasonable expectation of confidentiality; and Heppner's purpose was not to obtain legal advice from Claude, which disclaims that capacity. On work product, the court found the documents were not prepared by or at the direction of counsel and did not reflect counsel's strategy. Judge Rakoff noted the analysis might differ if counsel had directed the AI use because the platform could then arguably function as an agent of counsel.
Most importantly, Heppner doesn't hold that using AI tools automatically waives privilege. It holds that a non-lawyer's queries to a public AI platform that offers no confidentiality never satisfy the foundational requirements for attorney-client privilege in the first place: a confidential communication with a lawyer for the purpose of obtaining legal advice. Heppner is important and worth attention, but it is not the final word on lawyers (and those acting at their direction) and the content of AI prompts and results; much remains to be analyzed on an application-by-application, case-by-case basis. The bottom line, though, is that if a party or witness is talking to a machine and not a lawyer, the privilege analysis doesn't even get off the ground.
Warner: The Civil Counterweight
Look back one week. In Warner, a federal magistrate judge reached a different result in a civil case. A pro se party had used ChatGPT to prepare legal briefs in anticipation of litigation. When opposing counsel sought discovery of those materials, the court denied the request, holding the materials were not discoverable work product under Rule 26(b)(3) and independently not relevant or proportional under Rule 26(b)(1). Critically, the court also held that using AI didn't waive work product protection, because AI tools are "tools, not persons," and waiver requires disclosure to an adversary or in a way likely to reach one – a standard that AI use alone doesn't meet. The court didn't mince words with defense counsel either, stating that their "preoccupation with Plaintiff's use of AI needs to abate" and agreeing with the plaintiff that the request was a "fishing expedition" that, if endorsed, "would nullify work-product protection in nearly every modern drafting environment, a result no court has endorsed."
One key difference here involves civil procedure vs. criminal procedure rules. Rule 26(b)(3) protects materials prepared in anticipation of litigation by a party or its representative – it doesn't require that a lawyer prepare the materials, only that they were created in anticipation of litigation. The pro se litigant's use of AI fell squarely within that protection, and the court saw no reason to treat AI-assisted drafting differently from any other tool a litigant might use to prepare her case.
The Real Distinction: It's Not the AI, It's How You Use It
This is the critical point most commentary misses. Heppner and Warner reach opposite conclusions not because one case says AI can never be privileged while the other says it always is. They reach opposite conclusions because of the specific circumstances in which the AI tools were used and the materials were sought. In Heppner, a represented defendant used a public AI platform on his own initiative, without counsel's direction, through a service whose terms disclaimed both legal advice and confidentiality. Those materials were then seized by the FBI pursuant to a search warrant. In Warner, a pro se litigant used AI as part of her own litigation preparation, and opposing counsel tried to compel production through a discovery request.
Mr. Heppner's computer and AI activity information was already seized and in the hands of the government, while Ms. Warner was resisting a written discovery request for information in her possession, custody or control. The procedural context matters enormously, and lawyers discussing AI privilege need to know the circumstances under which the materials were created and how they ended up in dispute.
Extrapolating from Warner: Lawyers Using AI Tools
If a pro se party's use of ChatGPT to prepare litigation materials qualified for work product protection in a civil case, the same logic should apply – and arguably applies even more strongly – when a lawyer uses AI tools. A lawyer directing the use of an AI tool as part of a legal representation brings more deliberation and control to the process than a pro se litigant. As long as the materials are created in anticipation of litigation and not disclosed to an adversary, they should receive the same protection Warner afforded.
The use of the AI tool itself doesn't waive privilege or work product protection. What matters is whether the materials are created in anticipation of litigation and kept confidential. This is the real area where practitioners need to focus, because waiver is a real concern.
The Real Risk: Public and Commercial AI Tools
There is genuine waiver exposure when using public or commercial-grade AI tools. That's because, as the Heppner decision went to lengths to emphasize, the platforms and their terms make clear that user information is neither private nor secured, and users have no guarantee of confidentiality. Essentially, when you input confidential client information into ChatGPT or similar consumer tools, you're disclosing that information to a third party without any contractual protection or confidentiality agreement. If that information is later exposed through a data breach, logging, or litigation (like the ongoing OpenAI New York class action litigation, which has resulted in preservation obligations covering massive volumes of ChatGPT prompts and results for millions of users), you've potentially waived privilege through disclosure – not through the mere act of using an AI tool.
The distinction is crucial: using an AI tool doesn't waive privilege. Disclosing confidential client information through an unsecured channel does.
Practical Implications
Lawyers and businesses using AI in their practice should focus on:
- Using enterprise AI tools or tools with explicit confidentiality agreements rather than public consumer tools.
- Implementing siloed or secure instances where AI interactions involving legal matters are segregated from general business operations.
- Directing AI use through counsel when AI is part of the litigation workflow, and maintaining clear documentation that materials were created in anticipation of litigation, especially in civil matters where work product protections are broader.
- Not assuming that sharing AI outputs with counsel after the fact creates privilege. Heppner held that non-privileged materials don't become privileged merely because they are later shared with an attorney. The time to protect information is before it enters the AI platform, not after.
- Avoiding disclosure of confidential client information to public AI platforms where you cannot control downstream use or exposure.
- Updating AI governance and acceptable use policies to specify which platforms are approved, what information may be entered, and what protocols apply when AI-generated materials touch on litigation, investigations, or regulatory matters.
The one undeniable takeaway from both decisions is that AI prompts and results are ESI, and therefore subject to preservation, civil discovery, and criminal search and subpoena production.
Neither case ends the conversation about whether AI use is categorically safe or unsafe for privilege. The lesson is that the privilege analysis turns on the same factors it always has: whether there is a confidential communication with a lawyer for the purpose of obtaining legal advice, whether materials are created in anticipation of litigation, and whether confidentiality is maintained. The AI tool itself is neutral, and AI is not a lawyer – it's a powerful technology, but still a technology application like Westlaw or Google or an email or text messaging platform. How you use it, who is using it, and why determine whether privilege applies. Then, assuming the material is privileged, the efforts you take to secure the content from publication or disclosure determine whether that privilege is waived.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.