AI Conversations and Attorney-Client Privilege: What United States v. Heppner Means for Your Organization

Brandon S. Zarsky, Antony B. Kamel and David W. Badie

Article

Every time an employee types a legal question into ChatGPT or Claude, they may be creating a discoverable record – one your attorneys cannot protect. A landmark February 2026 ruling from a federal court in New York just made that risk impossible to ignore.

On February 10, 2026, Judge Jed Rakoff of the United States District Court for the Southern District of New York issued a first-of-its-kind ruling that should prompt every organization using artificial intelligence to reexamine its policies. In United States v. Heppner, No. 25-cr-00503-JSR, Judge Rakoff held that documents generated using a consumer AI chatbot are protected by neither attorney-client privilege nor the work product doctrine.

Background

Bradley Heppner, former CEO of a financial services company, was arrested in November 2025 on charges of securities fraud, wire fraud, conspiracy to commit fraud, making false statements to auditors, and falsification of records in connection with an alleged scheme to defraud investors. After receiving a grand jury subpoena and engaging defense counsel, Heppner used the consumer (non-enterprise) version of Anthropic’s Claude to prepare approximately thirty-one documents outlining his defense strategy and potential legal arguments. Some of the information Heppner entered into the AI tool was information he had learned from his attorneys during the course of their representation. Heppner subsequently transmitted these AI-generated documents to his lawyers.

During the execution of a search warrant at Heppner’s residence, federal agents seized electronic devices containing these AI-generated documents. The government moved for a ruling that the documents were not privileged, and Judge Rakoff agreed, ruling from the bench that both attorney-client privilege and work product protection were unavailable.

The Court’s Ruling on Attorney-Client Privilege

The attorney-client privilege protects communications between a client and an attorney that are intended to be, and in fact were, kept confidential, for the purpose of obtaining or providing legal advice. The government argued, and the Court found, that the AI-generated documents failed to satisfy multiple elements of this test. 

First, the communications were not between Heppner and his counsel. An AI tool is not an attorney. It holds no license to practice law, owes no duty of loyalty or confidentiality, and is not subject to professional responsibility rules. Judge Rakoff found that discussing legal matters with an AI platform is legally no different from talking through a case with a non-attorney friend, and does not create privileged communications.

Furthermore, the communications were not made for the purpose of obtaining legal advice. The government highlighted that Anthropic’s own public materials state that Claude is designed to choose responses that do not give the impression of providing specific legal advice. The AI tool’s terms of service explicitly disclaim the provision of legal services and advise users to consult with qualified lawyers. A user cannot claim privilege based on seeking legal advice from a tool that expressly disclaims providing it.

Moreover, the communications were clearly not confidential. Perhaps most significantly, Judge Rakoff found that Heppner had no reasonable expectation of confidentiality when using a consumer AI platform. Anthropic’s privacy policy in effect at the time expressly stated that the company collects data on user prompts and outputs, may use this data to train its AI models, and may disclose data to governmental regulatory authorities and third parties. The court emphasized that the tool “contains a provision that any information inputted is not confidential.” Voluntary disclosure of information to a third party that does not maintain confidentiality waives any privilege that might otherwise apply.

Nor did sending the AI-generated documents to counsel after the fact create privilege. Heppner’s attorneys argued that by transmitting the documents to his legal team, the documents became privileged. Judge Rakoff rejected this argument, applying the well-settled principle that sending preexisting, non-privileged documents to an attorney does not retroactively cloak them with privilege.

The Court’s Ruling on Work Product Doctrine

The work product doctrine provides qualified protection for materials prepared by or at the behest of counsel in anticipation of litigation or for trial. Heppner’s counsel conceded that Heppner had prepared the AI-generated documents on his own initiative, and not at the direction of his attorneys. This concession proved fatal to the work product claim. Because Heppner acted independently rather than at counsel’s direction, the documents could not qualify as attorney work product.

Key Takeaways for Clients and Organizations

While this ruling arose in a criminal prosecution and is based on traditional privilege principles rather than novel AI-specific rules, its implications extend far beyond the facts of this case. Additionally, although this is the first ruling of its kind, we anticipate that other courts will follow suit.

  1. Communications with consumer AI tools are unlikely to be privileged. Consumer AI platforms are not attorneys, do not provide legal advice, and do not maintain confidentiality. Inputs to these tools may be used for model training and may be disclosed to government authorities. These characteristics are fundamentally inconsistent with the requirements for attorney-client privilege.
  2. AI platform terms of service matter. Courts will examine the privacy policies and terms of service of AI tools when evaluating privilege claims. Organizations and individuals should carefully review these policies before using any AI tool to analyze confidential or sensitive legal matters.
  3. Sharing privileged information with AI may waive privilege over the underlying communications. The implications of this ruling extend beyond the AI-generated outputs themselves. Heppner entered information he had received from his attorneys into the AI tool. The government argued, and Judge Rakoff agreed, that sharing privileged communications with a third-party AI platform may waive the privilege over the original attorney-client communications. This is perhaps the most significant and concerning aspect of the decision for organizations.
  4. The consumer-versus-enterprise distinction may be critical. The ruling focused on a consumer AI platform with terms of service that disclaim confidentiality. Enterprise AI agreements that contractually guarantee input confidentiality, prohibit use of data for model training, and restrict disclosure to third parties may support a different privilege analysis. Organizations should ensure their enterprise AI contracts explicitly address confidentiality protections if they intend to use these tools for legal analysis.
  5. Work product protection requires attorney direction. If AI-assisted legal research or analysis is conducted at the direction of counsel as part of the attorney-client relationship, both privilege and work product protections are more likely to apply. Self-directed research using AI tools, even if later shared with attorneys, does not receive work product protection.

Recommendations

  1. For attorneys: Advise clients explicitly that anything they input into a consumer AI tool may be discoverable and is almost certainly not privileged. Consider including this guidance in engagement letters and making it part of client onboarding.
  2. For in-house counsel and legal departments: Audit your organization’s AI usage policies through a privilege lens. Most AI policies focus on data security, accuracy, and intellectual property, but few address the privilege implications of employees using AI to analyze legal questions. Establish clear protocols requiring that AI-assisted legal analysis flow through the legal department when privilege protection is important.
  3. For businesses and organizations: Distinguish between consumer and enterprise AI deployments. If your organization needs to use AI for matters involving confidential or privileged information, ensure you have enterprise-grade agreements that provide contractual confidentiality protections.

Conclusion

United States v. Heppner represents the first federal court ruling directly addressing whether privilege attaches to materials generated through consumer AI platforms. The decision confirms that consumer AI tools are not confidential channels, and using them to analyze legal matters creates discoverable records. As AI adoption continues to accelerate, organizations should act now to ensure their AI governance programs are structured to preserve the legal protections that matter most. At Frier Levitt, we assist clients in reviewing AI usage policies, negotiating enterprise AI agreements, and developing governance frameworks that protect privilege and other legal interests.


Frequently Asked Questions About AI, Privilege, and Work Product

Are AI chatbot conversations protected by attorney-client privilege?

No. As confirmed in United States v. Heppner (S.D.N.Y. 2026), communications with consumer AI tools do not qualify for attorney-client privilege because the AI is not an attorney, does not provide legal advice, and does not maintain confidentiality.

Does enterprise AI offer better privilege protection than consumer AI?

Potentially. Enterprise AI agreements that contractually guarantee confidentiality and prohibit data training use may support a stronger privilege argument, though no court has definitively ruled on this.

What is the work product doctrine, and does it cover AI-generated documents?

The work product doctrine protects materials prepared at the direction of counsel in anticipation of litigation. AI-generated documents created independently by a client — not at an attorney’s direction — do not qualify.