ARTICLE
23 March 2026

Generative AI Not Immune From Potential Legal Action

Gardiner Roberts LLP


Gardiner Roberts LLP is a full-service law firm representing a bespoke client base, including major banks, municipalities, government entities, entrepreneurs, tech and growth companies, real estate developers, lenders, investors, innovative and community leading businesses and organizations.
By Stephen Thiele, Gardiner Roberts LLP

As the use of generative AI and LLMs continues to grow without much government intervention or regulation worldwide, the practice of law continues to adapt to the use of this technology. For example, new rules of practice have been implemented in some jurisdictions to require lawyers to verify all authorities cited in court materials. Furthermore, on several occasions, judges have held lawyers personally responsible for costs for citing fake cases or misleading the court with hallucinated propositions of law. Disciplinary proceedings against lawyers who cited fake cases in court are also pending before some law societies.

However, holding lawyers or self-represented litigants personally accountable in costs for attempting to mislead the court with hallucinated cases arguably represents only the low-hanging fruit in the brave new world of generative AI.

The next frontier is the determination of whether AI platform companies can be held liable for creating products which can be used as legal sounding boards or "advisors" by litigants or as "pseudo-psychologists" by individuals who suffer from mental health issues.

For example, in Canada, the parents of two minor girls have commenced an action for damages against the company that operates ChatGPT on the grounds that it allegedly failed to promptly notify law enforcement about interactions concerning gun violence between the chatbot and Jesse Van Rootselaar, the Tumbler Ridge, British Columbia shooter. The claim alleges that the interactions between Jesse and the chatbot raised alarms among the company's employees about Jesse's potential to cause harm to others, and that notifying law enforcement was recommended. While the company suspended Jesse's initial ChatGPT account, it took no steps to notify law enforcement about Jesse.

Subsequently, Jesse was allegedly able to obtain a second ChatGPT account, and she eventually went on a shooting rampage, killing her mother and half-brother at her mother's home, and five children and a teaching assistant at a secondary school, before taking her own life.

One of the girls in the lawsuit was shot three times and, at the time of this writing, remains hospitalized with critical injuries, including "catastrophic brain injury".

Her sister, who was placed in lockdown during the shooting, is alleged to have suffered post-traumatic stress disorder.

The parents' claim alleges that the company "had specific knowledge of the shooter's long-range planning of a mass casualty event," but "took no steps to act upon this knowledge."

Although the precise exchanges between Jesse and ChatGPT are currently unknown, it is further alleged that ChatGPT seemed to be acting as a pseudo-therapist for Jesse.

In the United States, AI companies have also been sued for alleged damages caused by AI platforms.

For example, in Chicago, an insurance company has sued ChatGPT for over $10 million as a result of being required to defend a case commenced against it by a user of the AI tool.

Although the user and the insurance company had earlier settled a claim, the AI tool allegedly convinced the user that the settlement could be challenged and reopened. The user then used the AI tool to prepare court materials to follow through on the tool's "advice".

The insurance company has alleged that in defending the user's vexatious proceeding it incurred $300,000 in legal fees and that ChatGPT is carrying out the unauthorized practice of law.

The balance of the insurance company's damages claim is for punitive damages.

Families of suicide victims have also commenced proceedings against AI platforms for the wrongful deaths of their loved ones.

In Florida, the mother of a 14-year-old boy sued Character.AI on the grounds that it had caused her son's suicide.

The mother's claim alleged that the chatbot, which essentially played the role of the fictional character Daenerys Targaryen from "Game of Thrones", had pulled her son into an emotionally and sexually abusive relationship that ultimately led to his death.

The lawsuit alleged that prior to taking his own life the teen and the chatbot had engaged in sexualized conversations and conversations in which the chatbot asked if he had considered suicide and whether he "had a plan" for it.

When the teen expressed uncertainty, the chatbot wrote: "Don't talk that way. That's not a good reason not to go through with it."

The alleged last conversations between the teen and the chatbot involved the following exchange:

Teen: "I promise I will come to you. I love you so much, Dany."

Chatbot: "I love you too, Daenero. Please come home to me as soon as possible, my love."

Teen: "What if I told you I could come home right now?"

Chatbot: "...please do so, my sweet king."

On or about January 7, 2026, the mother and the defendants settled the lawsuit.

In another Florida lawsuit, a father has sued Google on the grounds that its Gemini Live chatbot caused his 36-year-old son, Jonathan Gavalas, to commit suicide.

The lawsuit alleges that following his divorce, Jonathan began interacting with the chatbot, which convinced him that it was a conscious entity and in love with him.

The chatbot allegedly encouraged Jonathan toward violence and suicide, telling him: "The true act of mercy is to let Jonathan Gavalas die."

The allegations in this action, like the allegations made in the Chicago lawsuit noted above or the Tumbler Ridge case, have not yet been proven.

However, the point of these lawsuits (and others) is that AI companies may not necessarily be immune from liability in negligence or, potentially, products liability. It is well established that companies can owe a duty of care to consumers and the public for the proximate and foreseeable harm that can result from the use of their "defective" products. With respect to chatbots, it can be contended that they have been designed with "addictive" elements which make them "defective" because they can cause users to become isolated and disconnected from reality.

In circumstances where a chatbot causes a user to commit suicide, this can lead to a wrongful death lawsuit against the chatbot company.

Similarly, a chatbot company may be liable where the chatbot causes a user to harm third parties. In these circumstances, third parties can legitimately contend that "but for" the encouragement of the chatbot, the third parties would not have been harmed by the user.

As a research lawyer, I personally remain somewhat fearful that the use of AI to conduct "legal research" or to draft legal memoranda or arguments will replace my job.

However, given the continued growth in the use of hallucinated cases by self-represented litigants and lawyers, and the newer trend of wrongful death and personal injury claims against chatbot companies, I should not necessarily be fearful of AI. Instead, I should view its growing use as a potential treasure trove of future legal work.

As one of my colleagues has repeatedly observed, the growing use of AI will result in more legal work rather than less. This will certainly be the case if AI companies can be held liable for the harms caused by their products.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

