The Personal Data Protection Authority announced, through a public notice published on its official website on 12 March 2026, that it has issued the Agentic AI Guideline (the “Guideline”). The Guideline provides explanations and examples regarding agentic artificial intelligence (“AI”) systems and the AI agents used within such systems (“AI agents”), assessments of the general risks arising from the use of agentic AI as well as of risks that may arise within the scope of Law No. 6698 on the Protection of Personal Data (the “DPL”), and evaluations of matters to be considered with respect to the protection of personal data in agentic AI systems.
Agentic AI and AI Agents
Agentic AI and Traditional AI
The Guideline states that agentic AI systems are structures capable of conducting action processes without the need for continuous human guidance and that they focus on achieving predefined objectives rather than merely executing specific tasks. In these systems, tasks can be defined, planning can be performed, and actions can be coordinated in accordance with changing conditions. Furthermore, the Guideline explains that agentic AI systems differ from traditional AI systems, which operate based on predefined data and rules, in that they possess a higher degree of autonomy, goal-orientation, and capacity to interact with their environment.
AI Agents and Multi-Agent Systems
According to the Guideline, agentic AI systems perform decision-making and action processes through software components referred to as AI agents. AI agents are defined as automated software capable of perceiving their environment, responding to it, and performing actions in line with designated objectives. Agentic AI systems, on the other hand, are described as higher-level structures that assign tasks and authorities to such agents and manage decision-making and action processes within a broader system architecture.
Where multiple objectives or a broad scope of tasks is involved, it is possible to use multi-agent systems. Within such systems, different agents may execute complex processes efficiently and flexibly through task allocation and coordination.
Potential Use Cases of Agentic AI Systems
The Guideline provides various examples of situations in which agentic AI systems may be used in practice. Certain examples are summarized in the table below:
| Sector and Relevant Business Process | Example of Use |
| --- | --- |
| Customer Support Processes | Agentic AI systems may be used to interact with customers and manage support processes in line with defined objectives (for example, initiating automated actions or performing evaluations based on predefined criteria). |
| Finance and Investment Processes | Agentic AI may be used for the continuous monitoring of market data and for conducting assessments in light of changing conditions. Such systems may provide decision support within the scope of portfolio management, risk analysis, and compliance processes, and may contribute to the detection of unusual transaction patterns. |
| Incident Response and Management Processes | In processes such as security breaches, agentic AI may be used to assess the scope of the incident, carry out response actions, and implement the measures necessary to prevent the recurrence of similar incidents. |
Risks Related to the Use of Agentic AI Systems
The Guideline states that, due to the goal-oriented, multi-step nature of agentic AI systems and their varying levels of autonomy, risks arise in more complex ways. The principal risks include the following:
- An increased level of autonomy enables systems to initiate actions without human intervention, making system behavior more difficult to predict.
- Lack of transparency and explainability, particularly in multi-step and multi-agent structures, makes decision-making processes difficult to monitor, may cause errors to propagate in a chain, and complicates the attribution of responsibility.
- Bias and discrimination risks may arise when large datasets are processed.
- Deficiencies in technical design and system architecture may lead to security and reliability issues, and multi-step data processing may create additional risks for the confidentiality and security of personal data.
Risks That May Arise Within the Scope of Personal Data Protection
- Purpose limitation and the data minimization principle: When data processing activities in multi-agent systems are assessed collectively, they may produce outcomes broader than those of the individual activities. In addition, because systems may process new data over time, datasets not initially foreseen may be used; data processing must therefore be limited to what is necessary for the intended purpose.
- Change in the scope of data processing and preservation of the legal basis: If systems begin to process new sets of personal data during operation, or use existing data for different purposes, data processing activities may deviate from the purposes initially determined and may come to rely on different legal bases.
- Sensitive data, inference, and profiling risks: Agentic AI systems are capable of analyzing data and generating new inferences from that analysis. In this context, information that does not constitute personal data when assessed in isolation may become personal data within the meaning of the DPL when combined with other data.
- Transparency, explainability, and traceability: It may become difficult to determine which data are processed for which purpose, creating risks also with respect to the data controller’s obligation to inform data subjects.
- Responsibility and accountability: Because data processing activities in agentic AI systems are often shared among developers, users, and other actors, determining responsibility for violations or unlawful outcomes may become difficult.
- Security, data confidentiality, and system resilience: As agentic AI systems operate in an integrated manner with multiple data sources and digital systems, manipulation of inputs within such systems may give rise to security risks.
- Autonomy and reduced human oversight: Where instructions are not sufficiently clear, limited human supervision may lead systems to choose unforeseen data-processing paths, thereby increasing risks related to personal data.
Matters Recommended in the Guideline to Be Considered with Respect to the Protection of Personal Data
Based on the risks summarized above, the Guideline sets out the following recommendations:
- Establishing human oversight mechanisms and ensuring an appropriate balance between autonomy and control.
- Ensuring explainability and traceability in agentic AI systems with multi-layered and distributed architectures.
- Implementing technical restrictions, control mechanisms, and behavioral monitoring tools to prevent systems from operating beyond the defined purposes and limits.
- Ensuring the accuracy, timeliness, and contextual relevance of the data used by the systems and preventing errors arising from input data.
- Clearly defining the roles, authorities, and responsibilities of all actors involved in the development and use of agentic AI systems.
- Systematically assessing risks throughout the lifecycle of agentic AI systems and conducting a data protection impact assessment where necessary.
- Updating not only technical safeguards but also governance mechanisms, determining the limits of use, and carrying out training and awareness activities for the people involved in the system.
Conclusion
In this context, it is important for companies in Türkiye that develop AI systems and process personal data to complete their DPL compliance processes without delay. With the publication of the Guideline, compliance expectations have become more visible, particularly with respect to data minimization, purpose limitation, preservation of the legal basis, explainability, human oversight, security measures, and accountability. Accordingly, companies should review their AI-based data processing activities from a risk-based perspective and update their technical and organizational compliance mechanisms where necessary.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.