The fast pace of technological change and AI adoption presents a strategic opportunity to strengthen Canada’s financial system while simultaneously posing risks.
Based on insights from four thematic workshops convened between May and November 2025, the Office of the Superintendent of Financial Institutions (“OSFI”), in collaboration with the Global Risk Institute (“GRI”) and other industry stakeholders, has published FIFAI II: AI Risks and Opportunities: Adopting an AGILE Framework in Canadian Financial Services (the “Report”),1 which provides an outlook on the opportunities, risks, and emerging practices associated with artificial intelligence (“AI”) adoption across the Canadian financial services sector.
From EDGE to AGILE
As discussed in detail in our earlier bulletin, the first report of the Financial Industry Forum on Artificial Intelligence (“FIFAI”) focused on internal risks associated with the development, deployment, and use of AI within financial institutions and introduced the EDGE principles of Explainability, Data, Governance, and Ethics as the pillars for responsible AI adoption across the financial industry.
However, the Report finds that EDGE, while relevant and necessary, is no longer sufficient in light of the pace and scale of AI adoption. In particular, as AI use becomes more embedded and risk exposures grow, institutions require a more adaptive and operational approach to governance and risk management.
To address this, the Report introduces the AGILE framework, which is intended to guide institutions in managing AI risks while enabling responsible innovation and resilience.
AGILE comprises five interrelated elements:
- Awareness – Institutions are expected to maintain an informed and evolving understanding of AI use cases, risk exposures, and broader system implications, including emerging threats and macro-level impacts.
- Guardrails – Institutions should maintain robust risk control frameworks, human oversight, data governance (including strong data integrity standards), and third-party due diligence to ensure that AI-embedded systems operate safely, predictably, and fairly.
- Innovation – Institutions should pursue innovation in a deliberate and risk-aware manner, treating AI as a driver of competitiveness rather than a replacement for human expertise.
- Learning – Institutions should build the necessary human capital and AI literacy at every organizational level, including employees and management, and participate in initiatives that develop talent and consumer awareness, particularly in light of talent shortages and the complexity of AI systems.
- Ecosystem Resiliency – Institutions should help fortify the system through coordination and shared standards across the financial ecosystem, including regulators, institutions, and third-party providers, with a focus on strengthening system-wide resilience and response capabilities.
The Report emphasizes that financial institutions should operationalize AGILE through strengthened governance and oversight, adaptable risk management practices, and continued investment in talent and capabilities. It also highlights the importance of coordination across the financial sector, including engagement with regulators and collaboration on shared challenges such as emerging threats and operational resilience.
The Evolving AI Risk Environment
The Report confirms that AI-related risks for financial institutions are expanding in both scope and complexity, and identifies six broad categories of risk.
- Strategic and Governance Risks, including the challenge of balancing timely AI adoption with effective risk controls. Institutions face increasing pressure to adopt AI while managing fragmented strategies, resource constraints, regulatory uncertainty, and data-related risks. The Report is clear that moving too quickly can lead to operational and consumer harms, while moving too slowly may result in missed opportunities and competitive disadvantages, such as potential disruption from technology firms.
- Security and Cybersecurity Risks have intensified, with AI enabling more sophisticated fraud and cyberattacks, including deepfakes, synthetic identity fraud, and automated bots. Indeed, a 2024 industry survey found that 91% of financial institutions globally are reconsidering voice-verification systems due to AI voice-cloning capabilities.
- Heightened Consumer Protection Risks, particularly as AI becomes embedded in decision-making processes such as credit adjudication, underwriting, and investment advice. Issues relating to transparency, explainability, bias, data security, and fraud exposure are expected to become more pronounced.
- Talent and Knowledge Gaps are identified as a key constraint on responsible AI adoption, with the pace of technological change outstripping institutional capacity for training and governance development.
- Increasing Third-Party and Supply Chain Risks, as growing AI dependencies span data, models, software components, and compute/cloud infrastructure. Additionally, AI adoption is deepening financial institutions’ dependence on a small number of third-party technology providers, which can heighten systemic fragility.
- Emerging Financial Stability Risks, including the potential for AI-driven market volatility, operational disruptions, and broader macroeconomic impacts from AI-driven labour and business disruption that could translate into rising credit risk for affected individuals and businesses.
Conclusion
The AGILE framework signals an expectation that AI strategies will be more integrated across governance, risk management, and business functions, and that effective risk management will require forward-looking approaches that keep pace with a rapidly changing environment.
The AGILE framework is likely to serve as a reference point for future discussions on AI governance and risk management in the Canadian financial services sector, particularly as institutions continue to scale AI adoption and system-wide considerations become more prominent. Institutions can expect regulators to provide greater clarity on how existing rules will apply to their AI practices, and should take proactive steps to operationalize the AGILE framework by investing in talent, infrastructure, and cross-sector collaboration.
Footnote
1 Office of the Superintendent of Financial Institutions, FIFAI II: AI Risks and Opportunities: Adopting an AGILE Framework in Canadian Financial Services (March 23, 2026).
The foregoing provides only an overview and does not constitute legal advice. Readers are cautioned against making any decisions based on this material alone. Rather, specific legal advice should be obtained.
© McMillan LLP 2025