Federal Court Holds AI Communications are Not Protected by Privilege or Work Product
Published: Apr 13, 2026
A recent decision from the Southern District of New York provides one of the first judicial answers to a fast-emerging question: are litigants’ communications with a publicly available generative AI platform protected from disclosure? In United States v. Heppner, the court held that written exchanges between a criminal defendant and a commercial AI chatbot were not protected by the attorney-client privilege or the work product doctrine. The ruling underscores that traditional privilege and work product principles apply squarely to new technologies, and that litigants using publicly available AI tools in investigations or litigation face serious disclosure risks.
Breakdown of the Case
The defendant, a corporate executive, was indicted on charges including securities fraud, wire fraud, and conspiracy. After his arrest, the FBI executed a search warrant and seized, among other things, approximately thirty-one documents memorializing exchanges between the defendant and a generative AI platform. Defense counsel conceded that the defendant created the “AI Documents” in 2025, after receiving a grand jury subpoena and after it was clear he was a target of the investigation. Acting on his own initiative and without any suggestion from his lawyers, he used the platform to prepare reports outlining potential defense strategies. Defense counsel asserted privilege, contending that the defendant created the AI Documents for the purpose of speaking with counsel and later shared them with his attorneys. The government moved for a ruling that the documents were protected by neither privilege nor work product, and the court granted the motion.
Attorney-Client Privilege Analysis
The court applied the familiar three-part test—requiring communications between client and attorney, intended to be and actually kept confidential, and made for the purpose of obtaining or providing legal advice—and found that the AI Documents failed on multiple grounds.
First, the court concluded that communications with an AI platform are “not between a client and his or her attorney,” as no attorney-client relationship can exist with an AI system. The court rejected analogies to neutral software like cloud-based word processors, emphasizing that all recognized evidentiary privileges are tied to a trusting human relationship with a licensed professional.
Second, the court found the communications were not confidential. The platform’s privacy policy stated that the provider collects user inputs and AI outputs, uses that data to train the model, and reserves the right to disclose data to third parties, including governmental authorities. The defendant, therefore, could have no reasonable expectation of confidentiality, and to the extent he included previously privileged information, he waived that privilege by disclosing it to the AI provider. This aspect of the ruling is closely tied to the specific platform at issue—a publicly available AI tool with broad data-collection and disclosure policies. By contrast, subscription-based AI platforms designed for legal professionals, such as Lexis+ AI, Westlaw Precision/CoCounsel, or Harvey, operate under contractual terms that prohibit the use of client data for model training, restrict disclosure to third parties, and are specifically designed to preserve the confidentiality of user communications. The court’s confidentiality analysis would thus likely differ where an attorney or client uses a platform whose terms affirmatively protect the confidentiality of inputs and outputs.
Third, the court held that the defendant did not communicate with the AI for the purpose of obtaining legal advice. Although defense counsel asserted the defendant used the tool to prepare to talk to counsel, counsel conceded they had not directed or suggested the AI’s use. The platform’s own disclaimers—that it is not a lawyer and cannot provide formal legal advice—reinforced this conclusion. The court further held that non-privileged communications do not become privileged simply because they are later shown to counsel.
Work Product Doctrine Analysis
Even assuming the defendant created the AI Documents in anticipation of litigation, the court held that they were not protected work product for two reasons.
First, the documents were prepared solely by the defendant on his own initiative and not “by or at the behest of counsel.” Second, the AI Documents did not reflect counsel’s then-existing strategy or mental impressions at the time they were created. The court also rejected the defendant’s argument based on non-discoverable information provisions in the Federal Rules of Criminal Procedure, holding the rule inapplicable because the AI Documents were seized under a valid search warrant rather than produced in response to discovery requests.
Practical Implications and Recommended Steps
The decision frames the interaction between generative AI and long-standing evidentiary doctrines in clear terms:
- Privilege requires a human attorney-client relationship.
- Entering privileged information into an AI platform governed by broad data-collection policies may waive existing privilege.
- Client-created AI outputs are unlikely to qualify as work product unless prepared at counsel's direction and genuinely reflecting counsel's mental processes.
Applying the principles outlined in this ruling, organizations and individuals should consider the following steps:
- Employees and clients should be educated not to input legal advice, litigation strategy, or confidential case assessments into publicly available AI platforms that do not contractually guarantee confidentiality. Where use of AI tools is appropriate, clients should rely on subscription-based AI platforms that are designed for legal work, that contractually ensure data confidentiality, and that do not use client inputs for model training.
- Before any AI use related to sensitive matters, platform privacy policies and data-handling practices should be carefully reviewed to determine whether the provider is contractually committed to maintaining confidentiality and to refraining from using data for model training or disclosing it to third parties.
- Where AI is used to assist in legal tasks, that use should be structured only under counsel's direction and supervision so that the resulting materials can more plausibly be tied to counsel's mental processes.
- Organizations should implement clear corporate policies governing employee use of generative AI in connection with investigations, regulatory interactions, or litigation.
- When responding to investigations or discovery, parties should consider whether seized or collected data sets include AI-generated content and plan review protocols accordingly, recognizing that privilege claims over such materials may face substantial hurdles.