17 March 2026

Lawsuit Alleges AI Chatbot Engages In Unauthorized Practice Of Law

Barnes & Thornburg LLP

By Kaitlyn E. Stone

Highlights

  • Liability surrounding AI tools, like that surrounding other online services, remains an open question for many stakeholders.
  • After entering into a settlement agreement in a prior case, an AI user sought advice from an AI tool on how to void the provisions of that agreement.
  • The defendant in the prior lawsuit has sued OpenAI alleging that, through its ChatGPT tool, OpenAI tortiously interfered with the settlement agreement, abused the judicial process, and engaged in the unlicensed practice of law.

Liability surrounding AI tools remains an open question, and litigants are testing the limits of such liability via novel legal theories. As discussed in a recent alert, some litigants and legislators are trying to graft traditional product liability frameworks onto AI technologies.

Now, a newly filed lawsuit takes a different approach, seeking to expand the scope of cognizable claims against AI tools. In Nippon Life Insurance Company of America v. OpenAI Foundation, et al., No. 1:26-cv-2448 (N.D. Ind.), the plaintiff brings claims for tortious interference with contract, abuse of process, and unlicensed practice of law.

The Allegations in the Complaint

Nippon's claims involve the alleged breach of a settlement agreement with a policyholder in a prior lawsuit. In that lawsuit, a policyholder, Dela Torre, sued to recover certain long-term disability benefits that were allegedly denied. In January 2024, Dela Torre and Nippon resolved that dispute and entered into a settlement agreement releasing all claims.

Yet, after finalizing the settlement, Dela Torre became dissatisfied with the agreement and sought her attorneys' advice on rescinding the release. After her attorneys advised her that she was bound by the settlement agreement, Dela Torre sought a second opinion — this time turning to ChatGPT for advice. The AI tool allegedly produced outputs saying that Dela Torre was misled by her attorneys and that she could seek to undo the settlement.

As a result of this guidance from the AI tool, Dela Torre began using the tool to craft legal arguments, draft legal documents, and determine strategies for vacating the settlement agreement. In just under a year, Dela Torre has made 58 separate filings in a second lawsuit against Nippon related to the parties' settlement agreement. Some of these filings allegedly include citations to fictitious cases and other unfounded legal arguments.

The Causes of Action Alleged

Nippon's complaint asserts three causes of action against OpenAI related to Dela Torre's actions:

Tortious Interference with Contract

Nippon alleges a cause of action for tortious interference with contract. This claim is premised on the assertion that, through ChatGPT's outputs, OpenAI knowingly induced Dela Torre to seek to vacate the settlement agreement in the prior lawsuit. Because of ChatGPT's "advice," Nippon has been forced to expend resources to enforce the settlement agreement and to respond to the multitude of filings from Dela Torre.

Abuse of Process

Nippon asserts that the numerous filings from Dela Torre were meritless and constituted an abuse of process. Because ChatGPT helped develop the strategy for, and assisted in drafting, these filings, the tool allegedly facilitated Dela Torre's abuse of the judicial process.

Unlicensed Practice of Law

Last, Nippon alleges that ChatGPT engaged in the unlicensed practice of law by providing Dela Torre with outputs containing legal analysis and advice, and by assisting in drafting legal filings. Together, these actions allegedly constitute the practice of law; because ChatGPT is not a licensed attorney, any such practice would be unauthorized.

In addition to monetary relief, Nippon seeks to enjoin OpenAI from allowing its AI tools to provide legal assistance in the state of Illinois.

The Need to Monitor Developments

Until the legal framework surrounding AI tools is clarified, it is important to keep abreast of new developments in this field. Approaches to AI liability vary widely, with some stakeholders calling for outright prohibitions on liability while others seek to impose product liability or other frameworks on these tools.

Where courts and legislatures will ultimately land in these debates remains uncertain; however, staying up to date on new developments may help parties react to emerging trends in this area of law.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

