On 8 January 2026, the ICO published its report on agentic AI, which sets out the ICO's understanding of the emerging technology and its potential capabilities, the novel data protection risks it presents, and its desire to encourage innovation within agentic AI that supports data protection rights.
This report follows the ICO's commitment in its AI and biometrics strategy, published in June 2025, to work with industry to explore the data protection implications of agentic AI.
Agentic AI
'Agentic AI' refers to systems that go beyond generative AI by coupling language capabilities with tools, memory and adaptive decision‑making, so they can plan and act to complete open‑ended tasks with limited human direction. Whilst traditional software typically follows a fixed path to resolve a problem, an agentic AI system can generate, and choose between, different ways of approaching it.
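To make the 'plan and act' pattern concrete, the following is a minimal, illustrative sketch of an agent loop. It is not taken from the report: the planner is a stub standing in for an LLM, and the tools (lookup_weather, book_taxi) and the task are hypothetical.

```python
from typing import Callable

def lookup_weather(city: str) -> str:
    """Hypothetical tool: returns a canned forecast."""
    return f"Forecast for {city}: mild, 14C"

def book_taxi(destination: str) -> str:
    """Hypothetical tool: pretends to book a taxi."""
    return f"Taxi booked to {destination}"

TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_weather": lookup_weather,
    "book_taxi": book_taxi,
}

def plan(goal: str, memory: list[str]) -> list[tuple[str, str]]:
    """Stand-in for the LLM planning step: chooses which tools to call.
    A real agent would generate and revise this plan adaptively."""
    steps = [("lookup_weather", "London")]
    if "meeting" in goal:
        steps.append(("book_taxi", "client office"))
    return steps

def run_agent(goal: str) -> list[str]:
    memory: list[str] = []                    # memory persists observations
    for tool_name, arg in plan(goal, memory):
        observation = TOOLS[tool_name](arg)   # act: invoke the chosen tool
        memory.append(observation)            # remember the result
    return memory

print(run_agent("prepare for my client meeting"))
```

The key point for data protection purposes is that the tools, the memory and the plan each create a processing activity the organisation remains responsible for.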
The report notes the potential use cases for agentic AI across various industries as capabilities, and user comfort, increase. Agentic systems could improve functions in the workplace, government services, cybersecurity (where they could potentially be used to harm, as well as to protect, secure systems) and commerce; 'agentic commerce agents' are already available, but we could see agents anticipating shopping needs, factoring in budgetary constraints, and then making proactive purchases.
Data protection concerns
The report highlights the importance the ICO places on taking a privacy-positive approach to the innovation of agentic AI, noting that "[i]n the context of data protection, AI agency does not mean the removal of human, and therefore organisational, responsibility for data processing."
The report goes on to highlight a cluster of risks that go beyond those seen in standard generative AI deployments (which the ICO has already considered in previous consultations):
- Responsibility and controllership: Determining controller/processor roles and accountability may be harder in the agentic AI supply chain, particularly in multi‑vendor agentic ecosystems.
- Scaled‑up automation and ADM: The rapid automation of increasingly complex tasks increases the likelihood of automated decisions that have legal or similarly significant effects, triggering stricter UK GDPR requirements.
- Purpose creep and data minimisation: Open‑ended agent tasks can tempt overly broad purposes and excessive data access; it is important to have a clear purpose for processing information used by an agentic AI system, and to have a justifiable reason for the agentic AI to process that information.
- Unintended processing of special category data: Agents may infer or process sensitive data incidentally, which heightens the risk profile and means that both an Article 6 lawful basis and a supplementary Article 9 condition will be required.
- Transparency and explainability: Complex information flows in agentic systems (including agent‑to‑agent communication and multi‑tool chains) will not always be visible to humans, which makes it difficult to be transparent with users and complicates individuals' ability to exercise their data subject rights.
- Accuracy: LLMs are vulnerable to hallucinations, and these can rapidly become embedded in a system. As agentic AI systems make greater use of previously held information, inaccurate information in a system's memory can have a significant impact, undermining accuracy at scale.
- Security: Like any other internet‑facing system, agentic systems are subject to attack by malicious third parties. Their autonomy and their ability to perceive and learn from their environment present novel opportunities for compromise (including the potential for large‑scale automated attacks).
Practical considerations for privacy, risk, and product teams
Although the report is not formal guidance, it nevertheless highlights a set of action points for organisations that are building or deploying agentic AI systems:
- Clarify the allocation of controller/processor roles across every layer of the agentic chain, and ensure contracts allocate responsibilities (including for security, rights handling, and breach management).
- Update DPIA templates to cover the increased autonomy of agentic systems, heightened access, special category inference, and any human involvement.
- Map where significant/legally‑significant automated decision‑making may take place, and provide notices, explanations, and meaningful human review.
- Address the heightened cybersecurity risks.
- Guard against purpose creep by establishing specific purposes for each processing activity, and avoid defining purposes too broadly purely because of the open‑ended nature of agentic capabilities. Organisations should consider the ICO's consultation response on purpose limitation in the generative AI lifecycle (part of the ICO's recent consultation series on generative AI).
- Enforce data minimisation by applying the principle of 'least privilege', i.e. only grant access to the data that is necessary for defined tasks, and ensure your technical architecture enforces these controls (see the first sketch after this list).
- Implement technical measures to prevent agentic systems from accidentally using or inferring special category data, and ensure you have documented the Article 6 bases and Article 9 conditions on which any special category data is processed (see the second sketch after this list).
- Introduce mechanisms to verify information at critical decision points, to prevent inaccurate information from being embedded into a system (see the third sketch after this list).
- Maintain transparency and explainability (see the ICO's existing guidance, 'How do we ensure transparency in AI?').
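On least privilege, the first sketch below shows what task‑scoped data access might look like in practice. It is illustrative only: the customer record, field names and TASK_SCOPES mapping are all invented for the example.

```python
CUSTOMER_RECORD = {            # hypothetical record held by the organisation
    "name": "A. Example",
    "email": "a@example.com",
    "health_notes": "...",     # special category data: out of scope by default
    "order_history": ["order-1", "order-2"],
}

# Each agent task is granted access to named fields only.
TASK_SCOPES = {
    "send_delivery_update": {"name", "email", "order_history"},
}

def read_record(task: str, record: dict) -> dict:
    """Return only the fields whitelisted for this task; default is no access."""
    allowed = TASK_SCOPES.get(task, set())
    return {k: v for k, v in record.items() if k in allowed}

print(read_record("send_delivery_update", CUSTOMER_RECORD))
# health_notes is never exposed to the agent for this task
```

The design point is that the deny‑by‑default scope lives in the architecture, not in the agent's instructions, so an agent cannot talk itself into broader access.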
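On screening for special category data, the second sketch is deliberately crude: it uses keyword matching as a placeholder, and the term list is illustrative rather than exhaustive. Real deployments would need far more robust detection.

```python
import re

SPECIAL_CATEGORY_TERMS = [     # placeholder indicators, not exhaustive
    r"\bdiagnos", r"\breligio", r"\bethnic",
    r"\btrade union\b", r"\bbiometric\b",
]

def looks_like_special_category(text: str) -> bool:
    """Return True if the text appears to contain special category data."""
    return any(re.search(p, text, re.IGNORECASE) for p in SPECIAL_CATEGORY_TERMS)

def safe_agent_input(text: str) -> str:
    if looks_like_special_category(text):
        # Block (or route for human review) rather than let the agent process
        # data for which no Article 9 condition has been documented.
        raise PermissionError("Possible special category data: needs review")
    return text

print(safe_agent_input("Customer asked about delivery times"))
```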
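On verification at critical decision points, the third sketch shows a gate that checks an agent's claim against an authoritative source before committing it to memory; the AUTHORITATIVE_PRICES store is a hypothetical stand‑in for whatever system of record applies.

```python
AUTHORITATIVE_PRICES = {"subscription": 9.99}   # hypothetical source of truth

def verify_before_commit(key: str, claimed: float, memory: dict) -> None:
    """Write a value into agent memory only once it matches the record."""
    actual = AUTHORITATIVE_PRICES.get(key)
    if actual is None or abs(actual - claimed) > 1e-9:
        # Reject the claim so a hallucinated value is never persisted.
        raise ValueError(f"Unverified value for {key!r}: {claimed}")
    memory[key] = claimed

memory: dict = {}
verify_before_commit("subscription", 9.99, memory)   # accepted
print(memory)
```

The broader principle is that memory writes, external actions and anything irreversible are the natural places to insert such checks (or a human review step).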
The future of agentic AI
The ICO's report sets out four possible high-level scenarios for the adoption of agentic AI, all based on two key factors: the capability of AI agents, and the extent to which they are adopted:
1. Low capability and low adoption: what the ICO calls 'scarce and simple agents'
2. Low capability and high adoption: what the ICO calls 'just good enough to be everywhere'
3. High capability and low adoption: what the ICO calls 'agents in waiting'
4. High capability and high adoption: what the ICO calls 'ubiquitous agents'
These four scenarios formed the basis of the ICO's analysis of how privacy and data protection considerations might present in future developments, and the issues identified within them will help inform future policy thinking.
In the ICO's view, scenarios 2 and 4 are likely to be of the most importance to organisations; both involve high adoption of agentic AI, making them the most likely scenarios in which organisations will be deploying or encountering these systems in practice. Scenario 2 highlights the risks of agentic systems which don't necessarily work well, whilst scenario 4 highlights the risks of agentic systems that work as designed, but still raise significant privacy concerns.
What's next?
The ICO has explained that it plans to take a proactive approach to regulating agentic AI, including:
- inviting developers to use the ICO's Innovation Advice Service (a free service which can be used for advice on how to resolve data protection issues that are delaying the progress of new products)
- continuing to invite innovators to work with the ICO's Regulatory Sandbox.
The regulator also explicitly calls out opportunities where agentic AI, if designed well, could improve compliance outcomes: for example, data‑protection‑compliant agents, agentic controls (monitoring, auditing, permissions, authentication), privacy management agents and information governance agents, as well as methods for benchmarking and evaluation.
Given that the ICO is also developing a statutory code on AI and ADM (with implications for agentic AI) and updating its guidance on ADM and profiling (in light of the Data (Use and Access) Act), this report fits with the ICO's general direction of travel, reaffirming that strong data protection foundations are essential to developing successful systems and building public trust in agentic AI.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.