16 July 2025

(Pro) Health Issues Lead To Hallucinations...

Lewis Silkin

The case of Pro Health Solutions Ltd v ProHealth Inc [2025] 6 WLUK 622 involved an appeal before the Appointed Person (AP) in the Trade Marks Registry, decided on 20 June 2025. The dispute centred on trade mark registration and invalidation proceedings, with the appellant, a litigant-in-person, challenging decisions that had favoured the respondent, ProHealth Inc. The appeal was ultimately dismissed, but the case is notable for its detailed consideration of the use of generative artificial intelligence (AI) in legal submissions, particularly the citation of fabricated authorities, otherwise known as "hallucinations". Hallucinations are a well-known limitation of large language models such as ChatGPT: the model sometimes generates output that is factually incorrect and does not correspond to any information in its training data.

The proceedings revealed that the appellant, acting without legal representation, had prepared his grounds of appeal and skeleton argument with the assistance of ChatGPT. While the cases cited were real, the documents included purported quotations that did not appear in the actual judgments, as well as summaries that, in several instances, misrepresented the decisions. The appellant candidly admitted to using AI and apologised for the errors.

The respondent, represented by a trade mark attorney, also encountered difficulties. Although the cases cited in the respondent's skeleton argument were genuine and correctly referenced, the attorney, when asked at the hearing, was unable to identify which parts of the judgments supported his legal propositions. An email he sent to the AP after the hearing did nothing to improve the position.

Warnings and Guidance on AI Use 

As highlighted by my colleague Fiona here, AI hallucinations are becoming an increasingly common issue in legal proceedings, with sanctions being considered and/or issued against those who mislead the court, most notably in Frederick Ayinde v The London Borough of Haringey [2025] EWHC 1040 (Admin), which the AP referenced in this case.

While the AP drew a clear distinction between honest mistakes or misunderstandings of the law and the more serious issue of relying on fabricated legal authorities, they still emphasised that all parties, regardless of experience or representation, have a duty not to mislead the tribunal by submitting false or fabricated citations. 

The AP pointed out that this distinction had clearly been muddied here. Nevertheless, the AP refrained from issuing sanctions against either party, although the respondent's representative narrowly avoided a referral to IPReg, the regulator of trade mark attorneys.

Instead, the AP focused their attention on suggesting that the Registrar consider adopting a practice of including "very clear warnings in any correspondence requesting submissions, skeletons or other documents". These warnings should "set out explicitly the risks of using [AI] for legal research, drafting skeleton arguments or written submissions as well as setting out the potential consequences that may arise if a person puts fabricated citations before the registrar or Appointed Person". 

Implications

Readers will shudder at the thought of finding themselves in a similar situation, but in truth this decision should have little impact on how you conduct such matters. Ultimately, it serves as no more than a reminder that generative AI remains no substitute for reliable, verified legal analysis conducted by a trained legal professional.

We look forward to seeing whether the Registrar takes the AP up on the suggestion of issuing clearer warnings and guidance on the use of AI.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
