26 March 2026

Navigating 2026: Key Trends Shaping The Technology Sector

Fasken

Contributor

Fasken is a leading international law firm with more than 700 lawyers and 10 offices on four continents. Clients rely on us for practical, innovative and cost-effective legal services. We solve the most complex business and litigation challenges, providing exceptional value and putting clients at the centre of all we do. For additional information, please visit the Firm’s website at fasken.com.

Trend #1: Agentic AI Will Continue to Expand the Frontier of Automation, Prompting Organizations to Balance Innovation with Rigorous Legal, Governance, and Operational Controls

The key factor that distinguishes agentic AI from other iterations of artificial intelligence is its ability to accomplish a specific goal autonomously – i.e., to understand, plan, and execute tasks without human intervention. By way of comparison, generative AI (one of the trending forms of AI in recent years) focuses on creating content based on learned patterns, whereas agentic AI goes a step further by taking that content and completing an action.[1] For example, agentic AI can not only tell you the best time to fly for your trip and help you build an itinerary, but also book your flight and accommodations on your behalf.

EMERGING ADOPTION

Salesforce's Agentic Enterprise Index[2], which surveyed AI usage across a cohort of businesses using Salesforce platform data, found that the top three most popular use cases for AI agents are customer service, internal business automation, and sales. In sales, drafting and sending emails are the top agent actions, followed by developing to‑dos and scheduling meetings. A November 2025 McKinsey report[3] on the state of AI found that 23% of respondents are scaling an agentic AI system somewhere in their enterprises (that is, expanding the deployment and adoption of the technology within at least one business function), and a further 39% say they have begun experimenting with AI agents. Use of AI agents is most commonly reported in IT and knowledge management (e.g., service‑desk management in IT and deep research in knowledge management).

The key takeaway is that use of AI agents is not yet widespread. Agents are being used for specific functions and tasks, and in any given business function no more than 10% of respondents say their organizations are scaling AI agents. The findings suggest that organizations are largely curious about agentic AI and are still experimenting with it (approximately 62% of respondents).[4]

Gartner predicts that organizations are in a crucial three‑ to six‑month window to define agentic AI product strategies. The AI industry has reached an adoption inflection point, with Gartner projecting that 40% of enterprise applications will include integrated task‑specific agents by 2026 (up from 5% in August 2025).[5]

SETTING CLEAR BOUNDARIES AROUND HOW AND WHERE TO DEPLOY AGENTIC AI

Organizations should have a clear understanding of the capabilities of the agentic AI systems they engage with and clearly define boundaries for what they will allow agentic AI to do autonomously and which other tasks still require human involvement and real‑time oversight. Low‑risk, repetitive, and easily automated tasks are appropriate for agentic AI,[6] whereas sensitive, complicated or high‑risk applications warrant greater caution.
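
The kind of boundary-setting described above can be expressed in software as a simple policy gate that routes high-risk actions to a human approver and refuses anything not on the allowlist. This is a minimal sketch; the action names and risk tiers are hypothetical examples, not a reference to any particular agent framework.

```python
# Minimal sketch of an action-boundary policy for an AI agent.
# Action names and risk tiers below are hypothetical illustrations.

AUTONOMOUS = "autonomous"          # low-risk: the agent may act alone
HUMAN_APPROVAL = "human_approval"  # high-risk: a person must sign off

# Explicit allowlist: any action not listed is refused outright.
ACTION_POLICY = {
    "draft_email": AUTONOMOUS,
    "schedule_meeting": AUTONOMOUS,
    "send_external_email": HUMAN_APPROVAL,
    "issue_refund": HUMAN_APPROVAL,
}

def authorize(action: str) -> str:
    """Return how an action may proceed: autonomously, with approval, or not at all."""
    return ACTION_POLICY.get(action, "refused")
```

Defaulting unknown actions to "refused" mirrors the caution recommended above: autonomy is granted task by task, rather than assumed.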

A Note on Marketing Liability

A related challenge arises when organizations, or the vendors they rely on, overstate the true level of autonomy that their AI systems can provide. Whether developing AI products marketed as “agentic” or procuring tools that claim to be agentic, organizations must assess how autonomous the product truly is before launching or purchasing it. 

While the Federal Trade Commission (FTC) in the US has been cracking down on deceptive AI claims generally, recent FTC activity suggests that "AI washing" is trending upward. In August 2025, the FTC sued Air AI (a company selling business coaching and support services) for making a number of misleading claims about its AI product, including that it can operate autonomously and "replace human customer service representatives and, in combination with other services, make business owners significant sums of money".[7] In reality, the FTC alleged, the AI technology frequently failed to perform basic functions such as placing outbound calls, scheduling appointments, recording email addresses, or responding accurately to questions.[8]

UNIQUE AI GOVERNANCE CHALLENGES

Beyond the typical governance issues that apply to AI adoption generally, agentic AI raises some unique concerns that organizations should be aware of.

1. Managing Liability

Organizations will need to shift their focus from erroneous content to improper actions. Under the traditional law of agency – where another human or corporation acts as an agent on behalf of an organization – the principal is generally responsible for the actions of its agents. In the case of agentic AI, organizations should assume that similar liability applies by default. When contracting with vendors, however, organizations should consider how this liability can be shared with the developers and/or distributors of agentic AI. For example, organizations may push back on provisions making the agent available on an "as is" basis and may instead require warranties or service levels covering accuracy, training, and availability. That said, vendors are at this stage generally hesitant to make such commitments, and this may be difficult to negotiate where the vendor has greater bargaining power. A customer should also consider requiring vendors to provide an indemnity covering any liability that may arise from improper actions taken by the agentic AI system.

2. Explainability

AI regulations around the world, both in force and proposed (including Canada's now‑defunct Artificial Intelligence and Data Act), have consistently sought to address the ethical principles of explainability and transparency. Despite the absence to date of a comprehensive AI regulation in Canada, organizations should expect that some level of explainability will be required, especially where AI agents are used in consumer‑facing applications or to make decisions about individuals.

It will be prudent to establish robust audit trails and activity logging for every decision made and action taken. Organizations will also need to be able to evaluate and verify each step of the workflow and ensure that monitoring and evaluation capabilities are built in to detect errors, improve performance, and provide visibility into how the system operates.
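
The audit-trail idea above can be sketched as an append-only log in which every agent step is recorded with a timestamp, the rationale considered, and the action taken. The field names and example entries here are illustrative assumptions, not a standard schema.

```python
import json
import time

class AuditTrail:
    """Append-only record of every decision and action an agent takes."""

    def __init__(self):
        self._entries = []

    def record(self, step: str, rationale: str, action: str) -> None:
        # One entry per workflow step, so each action can later be
        # explained and verified independently.
        self._entries.append({
            "timestamp": time.time(),
            "step": step,
            "rationale": rationale,
            "action": action,
        })

    def export(self) -> str:
        # JSON Lines output, convenient for downstream monitoring tools.
        return "\n".join(json.dumps(e) for e in self._entries)

# Hypothetical travel-booking workflow, echoing the earlier example.
trail = AuditTrail()
trail.record("plan_trip", "cheapest fare found for chosen dates", "selected_flight")
trail.record("book_trip", "fare within user-approved budget", "booked_flight")
```

Exporting one JSON object per line keeps the log machine-readable, which supports the evaluation and monitoring capabilities described above.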

3. Human Oversight

As agentic AI may rely on generative AI models, it is likely to be susceptible to the same types of hallucinations and other issues encountered when using generative AI in non‑agentic applications. While agentic AI is by design intended to be autonomous, organizations should still consider implementing some degree of direct human oversight to minimize the impact of errors that may arise. This will especially be the case as organizations explore more complex use cases for agentic AI. 

One question organizations should ask agentic AI vendors is whether they provide a managed service to address the need for human oversight. The vendor might be better equipped to readily understand certain logged data and act more efficiently in troubleshooting.

4. Testing

Given the ability of agentic AI to act autonomously, organizations should set careful testing standards and ensure that AI agents are rigorously stress‑tested in sandbox environments,[9] with sufficiently large sample sizes. This testing process, especially continuous testing during operation, is one way to maintain human oversight.

In addition to the obligations around accuracy discussed above, organizations should require an ongoing representation and warranty that the agent will continue to be accurate and perform in accordance with the agreement (including the technical specifications and the user documentation), even after updates are made, use cases change, or datasets are updated. Ongoing testing will be a key evaluation tool to ensure continuous accuracy.

Agentic AI represents the next major evolution in enterprise automation, one that shifts organizations from passively generating information to actively executing tasks with minimal human involvement. As organizations explore these opportunities, they must balance innovation with careful governance. Clear capability boundaries, responsible marketing practices and robust oversight frameworks will be critical to managing the heightened legal, operational, and ethical risks introduced by autonomous action. Ultimately, organizations that invest early in thoughtful testing, explainability measures, and shared liability models will be best positioned to leverage agentic AI safely and strategically.


Footnotes

1. See more information regarding the capabilities of agentic AI and the differences between agentic AI and generative AI here: IBM ‑ What is agentic AI? and IBM ‑ AI agents in 2025: Expectations vs. reality.

2. Full index: Salesforce ‑ The Agentic Enterprise Index.

3. Full report: McKinsey ‑ The state of AI in 2025: Agents, innovation, and transformation.

4. Ibid.

5. Full press release: Gartner Predicts 40% of Enterprise Apps Will Feature Task‑Specific AI Agents by 2026, Up from Less Than 5% in 2025.

6. IBM ‑ AI agents in 2025: Expectations vs. reality.

7. Full press release: FTC Sues to Stop Air AI from Using Deceptive Claims about Business Growth, Earnings Potential, and Refund Guarantees to Bilk Millions from Small Businesses.

8. Full FTC claim: Federal Trade Commission v. Air Ai Technologies Incorporated (2:25‑cv‑03068) at para 51.

9. IBM ‑ AI agents in 2025: Expectations vs. reality.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
