14 April 2026

From Algorithms To Awards: Artificial Intelligence In The Arbitration Toolkit

Blake, Cassels & Graydon LLP

As generative AI tools become increasingly integrated into arbitration practice, questions arise about their appropriate use by arbitrators and decision-makers. While institutions like the Chartered Institute of Arbitrators and the American Arbitration Association have begun developing guidelines, significant gaps remain in establishing clear boundaries between efficiency gains and the fundamental obligations of procedural fairness.

Lawyers’ reliance on generative AI tools (GenAI) for substantive legal work has been at the forefront of legal news in the last few years — often for the wrong reasons. Used properly, however, GenAI can make substantive legal work more efficient. Usually, discussions about the use of GenAI for legal work focus on its use by parties or their legal counsel, but an emerging aspect of this technology is its use by judges and arbitrators. 

Developing Guides and GenAI Use Cases for Arbitrators

While arbitration bodies have started developing guidelines for the use of GenAI by arbitrators, there is still a large gap between the obligations expressly directed at the parties and the thin guidance offered to decision-makers. Much of the guidance directed at decision-makers is subsumed into the overarching obligation of procedural fairness, coupled with the obligation of arbitrators to make their own decisions based only on material from the parties.

The Chartered Institute of Arbitrators (CIARB) has issued a Guideline on the Use of AI in Arbitration, which includes specific guidance for arbitrators. Although the use of GenAI is discretionary, the Guideline provides that arbitrators are responsible for all aspects of an award, regardless of any GenAI assistance with the decision-making process. The Guideline also emphasizes that arbitrators should not relinquish their decision-making powers. Specifically, arbitrators are expected to avoid delegating legal analysis, research, interpretation of fact and law, or application of the law to any GenAI tool. The Guideline is discussed in full in our previous bulletin, CIArb Issues Its First Guidelines for the Use of AI in Arbitration.

Embracing the rapid development and advantages of GenAI, the American Arbitration Association (AAA) recently introduced its AI Arbitrator, which has been trained specifically on AAA construction awards. The tool parses the parties' claims, analyzes the evidence, applies the law and drafts a proposed award. However, the AAA keeps a human in the loop: a human arbitrator reviews and revises the award as necessary before issuing it.

The AAA emphasizes that every decision made by the AI Arbitrator ultimately involves a human arbitrator and that "human legal judgment remains central at every step." The AI Arbitrator is designed to "empower, not replace" human arbitrators. In a similar vein, in March 2026, the AAA launched the Resolution Simulator, a tool that generates non-binding decisions modelling how a neutral arbitrator might analyze claims, counterclaims and legal arguments. The purpose of this tool is to assist parties in assessing the strengths and weaknesses of their cases, not to issue a binding decision.

GenAI can increase efficiency and reduce costs. However, the adoption of these tools must be both substantively and procedurally fair. While some institutions, such as the AAA and the Silicon Valley Arbitration & Mediation Center, already offer guidelines to arbitrators, not all the major institutions have done so, although it is likely that they will follow suit.

Challenges Arising from the Use of GenAI 

A petition filed at the U.S. District Court for the Southern District of California (LaPaglia v. Valve Corp.) highlights the risks when decision-makers use GenAI. In LaPaglia, the petitioner sought to set aside an arbitrator’s final award, in part because the arbitrator purportedly relied on GenAI to draft his decision. Among other grounds, Mr. LaPaglia argued that by outsourcing decision-making to GenAI, the arbitrator exceeded his jurisdiction: “[j]ust as courts have vacated awards when the decision-making is outsourced to a person other than the arbitrator appointed, so too must a court vacate when that decision-making is outsourced to an AI.” The Court dismissed the petition on jurisdictional grounds, leaving untested the true implications of the alleged conduct of the arbitrator and his reliance on GenAI. 

While not in the arbitral context, in United States of America v. Bradley Heppner, the Southern District of New York held that records of a party’s communications with a GenAI tool that “outlined what he might argue with respect to the facts and the law” were not protected by attorney-client privilege. Given the express terms of the GenAI tool’s privacy policy, the Court held that the defendant had no reasonable expectation of privacy over the information provided and — most importantly — the GenAI was not a lawyer, so there was no attorney-client relationship. Ultimately, all of his “conversations” with the AI tool were held to be producible. This is further explored in our recent bulletin, AI and Legal Privilege: Practical Considerations From Emerging Case Law.

Until institutional rules expressly delineate the acceptable use of GenAI, parties are strongly advised to set out their expectations when drafting the terms of the arbitration or Procedural Order No. 1. Particularly in arbitration, where parties are usually bound by strict confidentiality obligations, serious issues can arise from the unfettered use of GenAI tools by both the parties and their chosen decision-maker. Setting out the parties' rules early in the arbitration will limit the challenges that could arise further down the road.

Conclusion

As GenAI tools continue to evolve and become more embedded in arbitration practice, arbitrators, counsel and institutions alike will need to continue establishing clear guidelines and best practices to ensure that efficiency gains do not come at the expense of fairness, confidentiality or the integrity of the arbitral process.

For permission to reprint articles, please contact the Marketing Department at bulletin@blakes.com.

© 2025 Blake, Cassels & Graydon LLP.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

