The case is a clear warning of the limits of privilege when using generative AI tools
In a February 17, 2026 ruling in U.S. v. Heppner, No. 25 Cr. 503, the United States District Court for the Southern District of New York held that written exchanges between a criminal defendant and a publicly available Generative AI platform (Anthropic's Claude tool) were not protected by either the attorney-client privilege or the work product doctrine. Senior District Judge Jed S. Rakoff observed that neither the Court nor the parties had identified any case to date addressing these issues.
Background
In executing a search warrant at the defendant's home, the FBI seized various documents and electronic devices, including approximately 31 documents memorializing the defendant's communications with Claude. The defendant had created the documents after receiving a grand jury subpoena relating to potential fraud charges against him and coming to understand that he was the target of an FBI investigation. The defendant used Claude to generate reports that outlined defense strategy and potential legal and factual arguments he could make in his defense. Notably, the defendant did not use Claude at the direction of counsel.
The defendant asserted a claim of privilege over these documents, arguing that: (i) he had inputted into Claude information he had learned from counsel; (ii) the documents were created for the purpose of speaking with counsel to obtain legal advice; and (iii) the documents were subsequently shared with counsel.
The Government argued that the documents were not protected by either the attorney-client privilege or the work product doctrine.
The Court's ruling on attorney-client privilege
The Court applied the traditional formulation of the attorney-client privilege, i.e. that it protects communications: (i) between client and attorney; (ii) that are intended to be, and in fact were, kept confidential; and (iii) that are for the purpose of obtaining or providing legal advice. Noting that courts construe this privilege narrowly, Judge Rakoff found that the documents did not meet either of the first two elements and likely did not meet the third.
The Court held that the communications were not between the defendant and his counsel because Claude is not a lawyer, observing that “discussion of legal issues between two non-attorneys is not protected,” and concluding that this alone disposed of the privilege claim. The Court also rejected the suggestion that a user's inputs into a third-party AI platform were “more akin to the use of other Internet-based software, such as cloud-based word processing applications,” noting that “the use of such applications is not intrinsically privileged” and privilege requires “a trusting human relationship.” In the case of the attorney-client privilege, this meant “a relationship with a licensed professional who owes fiduciary duties and is subject to discipline.”
The Court also held that the communications were not confidential, both because the defendant communicated with a third-party AI platform and because Claude users consent to a privacy policy that puts them on notice that Anthropic collects data on both users' inputs and Claude's outputs and may disclose personal data to third parties. On that basis, the Court concluded that the defendant had no reasonable expectation of confidentiality in his communications with Claude.
The Court viewed the final element as a closer call, given the argument that the defendant used Claude to help prepare for discussions with counsel. The Court observed that, had counsel directed the defendant to use Claude, the platform “might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer's agent within the protection of attorney-client privilege.” But because counsel did not direct the use of Claude, and the defendant communicated with it “of his own volition,” what mattered was whether the defendant intended to obtain legal advice from Claude itself. The Court observed that Claude disclaims providing legal advice and directs users to consult qualified counsel. Finally, the Court observed that the defendant's subsequent sharing of Claude's output with counsel did not render the documents privileged.
The Court's ruling on work product
The Court also rejected the defendant's assertion of the work product doctrine. The doctrine provides qualified protection for materials prepared by or at the behest of counsel in anticipation of litigation or for trial. Its purpose is to protect counsel's “mental processes” and preserve a zone of privacy for legal theories and strategy.
The Court ruled that the documents were not protected by the work product doctrine because—even assuming they were created in anticipation of litigation—they were not prepared by or at the direction of counsel and thus did not reflect defense counsel's strategy. There was no dispute that the defendant acted on his own in creating the documents, rather than at the behest of counsel.
Judge Rakoff also declined to follow, and explicitly disagreed with, a 2021 S.D.N.Y. Magistrate Judge decision that concluded, in relevant part, that the work product doctrine is not limited to materials prepared by or at the direction of an attorney. The Court reasoned that such an approach risked untethering the doctrine from its core rationale of protecting lawyers' mental impressions and strategy.
Comment
This case is a clear warning that “client-side” use of consumer or public Generative AI tools can create non-privileged, discoverable material and can undermine the confidentiality arguments that are core to legal privilege doctrines.
Three practical takeaways stand out:
- Under Heppner, a public Generative AI platform is a third party, not counsel, and accordingly, communications with it will not be treated as attorney-client communications. The position may be different where the platform is used at the direction of counsel but, even then, there is no guarantee that inputs or outputs will be protected by privilege.
- While users may well have an additional expectation of confidentiality when employing an enterprise version of a Generative AI tool – since, unlike consumer versions, enterprise versions often segregate and secure inputs and do not use them for training purposes – in light of Heppner, users should still take care. Confidentiality was only one of several bases for the Court's ruling and many enterprise versions still reserve the right to disclose information in response to legal requests. To maximize potential protection, users should employ AI tools for legal purposes at the direction of and with the assistance of counsel, ideally documenting that the use is in anticipation of litigation.
- Clients should not assume that inputs to or outputs from a public AI tool will be protected by the work product doctrine – particularly where the tool has been used independently of counsel. In its ruling, the Court acknowledged that “the work product doctrine may apply to materials generated by non-lawyers,” but did not discuss the parameters of such an application. Different scenarios might present stronger work product arguments, including where the client used a private version of AI (at counsel's direction) with restrictions on training or further disclosure of the inputs. Indeed, in the civil context, Federal Rule of Civil Procedure 26(b)(3)(A) explicitly states that “[o]rdinarily a party may not discover documents and tangible things that are prepared in anticipation of litigation or for trial by or for another party or its representative (including the other party's attorney, consultant, surety, indemnitor, insurer, or agent)” (emphasis added). And “[g]enerally . . . work-product privilege is waived only if disclosure to the third party substantially increases the opportunity for potential adversaries to obtain the information,” and typically not where the third party and the disclosing party share an alignment of interests. Cellco P'ship v. Nextel Commc'n, Inc., 2004 WL 1542259, at *1 (S.D.N.Y. July 9, 2004) (internal citations omitted).
Although this appears to be the first court ruling to address the question directly, it tackles issues anticipated by earlier guidance from a number of U.S. institutions, including the ABA's Formal Opinion 512, the AAA‑ICDR protocols, the SVAMC Guidelines, and various judges' standing orders (which we discussed here), all of which cautioned against uses of Generative AI tools that would result in a waiver of confidentiality.
Courts will likely soon be asked to address the above and other adjacent questions, such as how privilege applies when counsel directs the use of a tool, whether enterprise configurations offering enhanced confidentiality protections affect that analysis, and what disclosures (if any) parties or tribunals should expect when AI tools are used in seeking or providing legal advice or in anticipation of litigation.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.