11 March 2026

Upper Tribunal Observes That Uploading Confidential Documents Into Open-source AI Tools Waives Client Confidentiality And Legal Privilege

Herbert Smith Freehills Kramer LLP

The tribunal also issued a firm reminder that lawyers who cite false cases are breaching professional obligations and wasting judicial time.

The Upper Tribunal of the Immigration and Asylum Chamber (the tribunal) has criticised legal representatives in two separate cases for their use of AI: UK v Secretary of State for the Home Department (AI hallucinations; supervision; Hamid) [2026] UKUT 81 (IAC). The judgment also makes important points relating to privilege and supervision.

While the tribunal acknowledged that the use of legal AI programmes by properly trained professionals was a step forward in legal practice, both for properly focused legal research and in other contexts such as large disclosure exercises, it warned that there are dangers which must be guarded against.

In relation to privilege, the court observed that uploading confidential documents into an open-source AI tool such as ChatGPT placed this information in the public domain, and so breached client confidentiality and waived legal privilege. The court also noted that closed-source AI tools which did not place information in the public domain, such as Microsoft Copilot, could be used for tasks such as summarising without these risks. It is not clear what submissions were made on this issue. However, this is an important point and will no doubt receive further detailed analysis and judicial consideration in due course.

We believe this to be the first reported case in England and Wales to consider the possible impact on privilege of using public AI tools. Questions of privilege and AI are also beginning to reach the courts of other jurisdictions. See for example our recent post on the New York case of US v Heppner No 25 Cr 503 (SDNY), in which the court found that client chats with Anthropic's Claude tool were not privileged. See also here for further reflections on navigating privilege when using AI.

In relation to supervision, the tribunal made it clear that a solicitor who delegates their work remains responsible for the supervision of that work and ensuring its accuracy. Failing to ensure that fee-earners under supervision were aware of the dangers of using non-specialist AI for legal research, or failing to undertake appropriate checks, was likely to result in a referral to the Solicitors Regulation Authority (SRA) or other professional body.

This judgment also demonstrates that, despite the clear warnings given to lawyers by the Divisional Court in the Ayinde case (considered here), courts at all levels continue to be confronted with fake citations, with around fifty such cases now reported in England and Wales.

The Civil Justice Council has recently published an interim report and consultation on the use of AI for preparing court documents. It makes various proposals about whether rules are needed to govern the use of AI, concluding that the position will depend on the court documents involved. For more information, see our blog post here. In the meantime, this particular tribunal has already changed its practice. The claim form by which judicial review is sought has been amended to require a legal representative to confirm by a statement of truth that any authority cited within the form or in any documents appended to it: (a) exists; (b) may be located using the citation provided; and (c) supports the proposition of law for which it is cited. Other forms and directions are to be similarly amended. A legal representative who signs such a statement in a case in which false authorities are cited should expect to be referred to their regulatory body.

The judgment in this case was handed down in November 2025 but has only recently been published.

Background

These two cases came to attention when, as in the recent Ayinde case, the tribunal exercised its Hamid jurisdiction. The Hamid jurisdiction gives a court or tribunal the power to ensure that lawyers conduct themselves according to proper professional standards. The cases were heard separately but the tribunal gave a combined judgment.

UK (anonymity order made) v SoS for the Home Department

The first hearing was held to decide whether it was appropriate to refer an accredited adviser to his regulator, the Immigration Advice Authority, for investigation. The adviser had been responsible for drafting grounds of appeal which referred to a case which was not available on BAILII and the citation for which was to a case of no relevance.

The adviser initially said that the reference arose as a result of human error and was not due to the use of an AI large language model such as ChatGPT. He subsequently accepted that the case was an AI creation, and said that it had occurred unknowingly, perhaps through his inadvertent use of the "AI mode" of a Google search.

In the course of explaining his use of ChatGPT, the adviser said that he had put draft emails to clients explaining Home Office decisions into ChatGPT to try to improve them and had also uploaded Home Office decision letters to ChatGPT to summarise them for clients.

R (on the application of Munir) v SoS for the Home Department

The second case concerned an application for judicial review of an immigration decision. The grounds for judicial review cited a number of authorities that the judge was unable to find. The judge asked the compliance officer for the law firm to identify the author of the grounds.

The compliance officer explained that the grounds had been drafted by a part-time trainee lawyer who was working at the firm under his own supervision. He himself had signed the statement of truth and was identified on the claim form as the claimant's legal representative. He said that the grounds were based on an outdated precedent, practitioner blogs and personal notes. The trainee had not verified the references.

Decision

The tribunal began by noting that it could not afford to have its limited resources absorbed by legal representatives who placed false information before the tribunal. The citation of non-existent cases sent judges on a fool's errand at the expense of other judicial business and was not in the interests of justice. Time spent on applications containing false legal information also risked a loss of public confidence in the tribunal's process. The primary duty of regulated lawyers was to the court and tribunal, and to the cause of truth and justice. That duty was not discharged by a professional representative who knowingly or recklessly placed false information before the tribunal or failed to supervise the work of those for whom they were responsible.

Despite the guidance in Ayinde to lawyers on their obligations when using AI, and the Law Society's publication of updated guidance on generative AI, the tribunal reported that it had seen a considerable increase in the citation of fictitious authorities in the second half of 2025. In order to curb the trend, the claim form by which judicial review was sought in the tribunal had been amended as set out in the introductory section above.

The tribunal then turned to the cases before it.

UK (anonymity order made) v SoS for the Home Department

The tribunal began by noting that the danger in using AI for legal research was not confined to generative AI models such as ChatGPT. The use of Google AI for legal research was also likely to generate false results.

The tribunal acknowledged that the use of legal AI programmes by properly trained professionals was a step forward in legal practice for properly focused legal research, and in other contexts such as large disclosure exercises. However any practitioner using non-specialist AI to undertake research or drafting must undertake checks to ensure information was accurate. Anyone responsible for legal practice at a firm of solicitors or regulated legal advisers needed to be aware of the pitfalls and the need to warn staff about the dangers.

No referral would be made to the Immigration Advice Authority or to the SRA, but this was only because the adviser had already referred himself. If he had not done so, the tribunal would have made such a referral.

The tribunal also observed that, in its view, putting client letters and decision letters from the Home Office into an open-source AI tool such as ChatGPT had the effect of placing this information on the internet in the public domain. This breached client confidentiality and waived legal privilege. Any regulated legal professional or firm that did this would need to inform their regulator and the Information Commissioner, which the adviser said he would do in this case. The court noted that closed-source AI tools which did not place information in the public domain, such as Microsoft Copilot, could be used for tasks such as summarising without these risks.

R (on the application of Munir) v Secretary of State for the Home Department

The tribunal considered this case to be more about supervision and the obligation to ensure that a tribunal was not misled than the naïve use of generative AI. It did not matter how the citation errors had come about. The point was that the qualified legal professional with conduct of the matter was expected to ensure that documents were checked, errors identified and only accurate documents sent to the tribunal. Failing to carry out checks was wasteful of both the tribunal's time and the opponent's time. None of this was in the interests of justice or of clients.

A supervisor who failed to ensure that the work of a more junior fee-earner did not contain false cases or citations was likely to be more culpable than a lawyer who failed to ensure that their own work was free from hallucinations. This was because a supervisor, as well as failing the tribunal, the public and the client, was also failing to aid the development of more junior lawyers.

The fact that wrong citations were provided for each of the authorities at issue was more problematic than the lawyer concerned had accepted. The provision of incorrect citations had led the judge on a fool's errand and wasted judicial time.

The tribunal recorded a number of other concerns about the lawyer's evidence and procedures. These included his lack of understanding of the extent to which AI was available in the modern world (and the fact that anyone with access to Google had access to AI), and a lack of clarity as to how records were kept of work done by individual fee-earners, meaning that there might be other unidentified files on which the trainee had worked and relied on false citations generated by AI.

The tribunal held that the lawyer must be referred to the SRA. The SRA would then decide what action to take.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

