A recent Upper Tribunal decision highlights an important risk: use of open-source AI platforms can result in the loss of legal professional privilege, exposing otherwise protected advice to disclosure during legal proceedings.
The case
In UK v Secretary of State for the Home Department [2026] UKUT 00081, the Upper Tribunal commented on whether legally privileged material would lose its protected status after being input into an AI platform. The tribunal reaffirmed a core principle: legal professional privilege depends on confidentiality. If confidential legal advice is shared with a third party in circumstances where it is no longer controlled or protected, privilege may be waived. The tribunal considered that inputting information into an open-source AI tool, such as ChatGPT, amounts to placing it on the internet and in the public domain.
The case concerned two separate sets of immigration proceedings, joined so that the tribunal could consider the conduct of the lawyers involved after false case citations generated by AI tools had been submitted to the tribunal. During the hearing, one of the solicitors also disclosed that he had uploaded client documents to ChatGPT in order to summarise Home Office decisions.
Legal professional privilege
“Legal professional privilege” protects certain communications from disclosure to the court or to other parties. There are two main types:
Legal advice privilege: protects confidential communications between a client and their lawyer made for the purpose of giving or receiving legal advice.
Litigation privilege: applies where litigation is reasonably in prospect or commenced and protects confidential communications between a client, their lawyer, and third parties, provided they are created for the dominant purpose of obtaining advice or evidence for the litigation.
Privilege belongs to the client, not the lawyer, and can be waived (intentionally or accidentally), for example by sharing privileged material with third parties or otherwise losing confidentiality.
Why AI changes the risk profile
The tribunal distinguished between the use of specialist and secure AI tools and non-specialist, freely available “open” tools, and used the terminology “open-source” colloquially to mean publicly available or consumer tools. Many widely available AI tools do more than generate answers. They may store prompts, analyse them, or use them to improve future performance. From a legal perspective, that can look very much like sharing confidential advice with a stranger, even if the stranger is a machine. Even if no human reads the material, the lack of control over what happens to the data may be enough to undermine confidentiality and, therefore, privilege. Once privilege is waived, it is usually gone for good, and the material may have to be disclosed in litigation or regulatory proceedings.
Practical guidance for clients
AI tools should be thought of as open meeting rooms rather than locked filing cabinets. If legal advice, draft statements, or supporting documents are entered into an AI tool (whether by uploading documents or entering content into prompts), the documents, prompts and outputs may later be treated as disclosable.
For individuals
Individuals increasingly use AI tools to help them understand legal problems, draft complaints, or prepare for disputes. That use carries real risk. To limit the risk:
- Do not upload legal advice or legal documents (including emails from your solicitor, draft witness statements, documents that may form part of the evidence of a case, or settlement correspondence) into public or consumer AI tools.
- Be cautious about “testing” legal arguments with AI before or after speaking to a lawyer. Notes, summaries or chronologies generated by AI may not be privileged, even if later shared with your solicitor.
- Privilege depends on confidentiality from the outset. If confidential information is first shared with an AI platform, it may lose protection permanently.
- If you are unsure, ask your lawyer before using AI in connection with a legal issue.
For organisations and in‑house legal teams
For organisations, the risk is rarely limited to the legal team alone. Privilege can be lost through well‑intentioned use of AI by employees or others who may have access to privileged materials.
- Public or consumer AI tools should not be used to summarise, review or analyse legal advice, investigation material, or dispute‑related documents.
- Clear internal guidance is essential. Staff should be trained to understand when legal professional privilege may apply; how easily it can be lost; and that AI tools should be treated like external third parties.
- Use only approved, secure AI systems where: data is not shared externally; prompts are not used for model training; and contractual confidentiality protections are in place.
Conclusion
The Upper Tribunal’s decision is not binding on higher courts, but its reasoning reflects long‑established principles of confidentiality and privilege. The novelty of AI does not change those principles. Where confidential legal material is shared with a third party in circumstances where control is lost, privilege may be waived.
Whether used by lawyers or clients, public AI platforms should be treated like any other third party when it comes to sharing confidential information. Clients should exercise caution when using AI in connection with legal issues, particularly where litigation or internal investigations are contemplated, as once privilege is lost, it cannot be restored.
The content of this article is for general information only. It is not, and should not be taken as, legal advice. If you require any further information in relation to this article please contact the author in the first instance. Law covered as at April 2026.