Canada's AI regulatory landscape for financial institutions is still taking shape. Without an overarching federal statute, the financial services industry must navigate a patchwork of guidance and regulation from privacy and securities regulators. As organizations move from pilot projects to deploying AI in real-world operations, they also face heightened exposure to securities litigation and privacy class actions. Dentons' Litigation and Dispute Resolution and Privacy and Cybersecurity groups hosted a webinar in November discussing these challenges. The following are the key takeaways on the regulatory landscape and the litigation risks for financial institutions adopting AI.
Adoption of AI in the financial services industry
According to a poll taken during the Dentons webinar, 63% of respondents are using AI only in experimental or pilot projects, and 35% have embedded AI in operational processes under human oversight. A smaller group of 5% is already deploying proprietary AI tools for decision-making in areas such as credit scoring and portfolio recommendations, which could carry significant risk if outputs or services are misrepresented, inaccurate or biased. As adoption shifts from pilots to higher-impact decision-making, financial services organizations that begin to rely on AI tools in their operations should weigh the accompanying regulatory compliance and litigation risks.
Poll question (from the webinar):
1. How is your organization currently using AI today?
- 63%: Experimental / pilot use — small projects that are not yet embedded in operations and not impacting real decisions
- 35%: Operational AI embedded in processes — AI actively supports processes used daily, but humans make all final decisions (e.g., customer service chatbots, document processing, fraud monitoring alerts, analytics dashboards)
- 5%: Proprietary AI tools in decision-making — AI directly influences or makes decisions (e.g., portfolio recommendations, credit scoring, underwriting, AML alerts)
- 8%: AI use in marketing and sales — customer segmentation, lead scoring, campaign optimization
- 20%: Ad-hoc or unofficial AI use by employees
- 10%: Not using AI yet
Canada's AI regulatory landscape
Canada currently operates in a regulatory vacuum, lacking a federal framework comparable to the EU AI Act. The Artificial Intelligence and Data Act (AIDA), introduced in 2022 under Bill C-27, aimed to regulate high-impact AI systems through requirements for risk assessment, governance, incident reporting and enforcement. However, AIDA ultimately failed to advance, leaving organizations to rely on sector-specific guidance from regulators.
OSFI Guideline E-23 on model risk management
In the absence of legislation, sector-specific guidance has provided industry best practices for AI use. OSFI Guideline E-23 on Model Risk Management, which takes effect in 2027, is a principles-based framework that applies to federally regulated financial institutions. The Guideline requires organizations to understand and manage AI model risks, assign risk ratings based on quantitative and qualitative factors, and implement governance across the model lifecycle. In practice, this means assigning clear roles and responsibilities before an AI model is implemented, understanding the model's limitations, and identifying and managing its specific risks. Ultimately, organizations remain accountable for assessing how AI models affect their risk profile before deployment.
CSA Notice and Consultation 11-348
In December 2024, the CSA issued Notice and Consultation 11-348 to clarify how securities laws can apply to AI in capital markets. The guidance emphasizes governance and oversight, requiring market participants to adopt policies for AI-specific risks and maintain "human-in-the-loop" controls. Explainability is key: market participants must be able to articulate how AI outputs are generated and what factors influence decisions in order to ensure accountability. Disclosure obligations warn against misleading statements or "AI washing" designed to attract investors. Market participants must also guard against conflicted decisions arising from biased data or flawed code. For advisors and dealers, firms must oversee AI-driven recommendations and provide clear client disclosures. Investment fund managers must describe AI use in offering documents and avoid exaggerated claims. Non-investment fund reporting issuers should disclose AI's operational impact and identify forward-looking statements, together with their underlying assumptions and risk factors. The Notice remains at the consultation stage and serves as guidance, but it highlights heightened regulatory expectations regarding the use of AI by market participants.
Québec and OPC guidance
Privacy law is also relevant to the financial services industry when AI models are used in the collection, use and disclosure of personal information. Québec's Act respecting the protection of personal information in the private sector requires disclosure when a decision is made exclusively through automated processing (such as AI) and grants the individual concerned specific rights: the right to be informed of the personal information used to render the decision, as well as of the reasons, principal factors and parameters that led to it. The individual may also have that personal information corrected and must be given the opportunity to submit observations on the decision to someone in a position to review it.
The federal OPC's "Principles for Responsible, Trustworthy and Privacy-Protective Generative AI Technologies," published December 7, 2023, similarly emphasizes transparency and accountability. Organizations should clearly inform individuals when AI is used in the collection, use and disclosure of personal information, explain how decisions were reached, and provide access and correction rights for personal information processed by AI models. Together, these frameworks underscore growing expectations around explainability, fairness and individual rights when personal data is processed by AI.
AI litigation risks
Litigation risks related to AI are emerging as a significant concern, particularly in cases of "AI washing," where companies misrepresent their use of AI to gain a market advantage. This trend mirrors greenwashing in the ESG space and has already led to preliminary lawsuits and SEC investigations in the United States, offering lessons for Canadian organizations. Key risks include:
- Regulatory investigations and litigation on AI-washing: Securities regulators may impose penalties for exaggerated or false claims about AI capabilities. Misleading statements about AI-driven advantages or proprietary technology in public disclosures can also be deemed material and lead to investor lawsuits.
- Privacy class actions: Using AI to process personal information without proper disclosure or consent creates risks for privacy and consent-related class actions.
Overstating AI capabilities, failing to disclose or obtain consent for AI-driven processing of personal information, and neglecting to monitor AI systems for bias expose financial services organizations and professionals to significant litigation and regulatory risk. Transparency and accuracy in public statements, along with compliance with privacy obligations, will be critical to mitigating these risks.
Thank you to Nada Farag, articling student, for her contributions to this insight.
About Dentons
Dentons is the world's first polycentric global law firm. A top 20 firm on the Acritas 2015 Global Elite Brand Index, the Firm is committed to challenging the status quo in delivering consistent and uncompromising quality and value in new and inventive ways. Driven to provide clients a competitive edge, and connected to the communities where its clients want to do business, Dentons knows that understanding local cultures is crucial to successfully completing a deal, resolving a dispute or solving a business challenge. Now the world's largest law firm, Dentons' global team builds agile, tailored solutions to meet the local, national and global needs of private and public clients of any size in more than 125 locations serving 50-plus countries. www.dentons.com
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances. Specific questions relating to this article should be addressed directly to the author.