9 December 2025

The Guideline On Generative Artificial Intelligence And The Protection Of Personal Data Has Been Published

Moroglu Arseven

By C. Hazal Baydar, LL.M., Moroglu Arseven

As part of the ongoing efforts to strengthen Türkiye's legal framework regarding the protection of personal data in relation to artificial intelligence systems, the Personal Data Protection Authority ("Authority") has published the Guideline on Generative Artificial Intelligence and the Protection of Personal Data ("Guideline") on 24 November 2025. The Guideline has been prepared within the scope of the Law on the Protection of Personal Data ("DP Law") and serves as an evaluative document.

Prepared with the aim of clarifying the impacts of generative artificial intelligence ("GenAI") technologies on personal data, the Guideline provides a systematic framework outlining the operating principles of GenAI systems, the risks they may pose, and how personal data processing activities carried out within these systems should be assessed.

The principal matters examined in the Guideline are concisely set out below:

  • The Guideline initially sets out certain fundamental concepts and definitions regarding artificial intelligence. In this context, notions such as open data, algorithmic decision-making systems, large language models, black-box structures, privacy-enhancing technologies, profiling, and synthetic data are addressed, drawing on definitions provided by the International Association of Privacy Professionals, the General Data Protection Regulation, the European Data Protection Supervisor, and the International Organization for Standardization.
  • GenAI is defined as a type of artificial intelligence trained on large-scale datasets and capable of producing original outputs such as text, images, video, audio or software code in response to user prompts. Unlike traditional artificial intelligence models that are designed for predefined tasks, GenAI can generate new and contextually relevant outputs by processing data patterns through deep learning and artificial neural networks. Due to this generative capability, GenAI has widespread use in areas such as content creation, summarization, question answering, code development and audiovisual production.
  • The lifecycle of GenAI models comprises defining the intended purpose and scope, collecting and preprocessing data, training and fine-tuning the model, performing accuracy and security evaluations, deploying the system, and continuously improving it through feedback. Each stage requires not only technical considerations but also the assessment of ethical, legal and societal implications to ensure that GenAI systems function safely and remain aligned with human-centered principles.
  • The Guideline notes that the application areas of GenAI span a wide spectrum, highlighting its active use in automated response generation for customer services, data analysis in healthcare, personalized learning content in education, targeted campaign design in marketing and advertising, code generation in software development, contextual result delivery in search engines, and document review processes in the legal field.
  • The Guideline outlines the associated risks, drawing attention to the significant implications of issues such as hallucinations and inconsistent outputs, bias and discrimination, the generation of inaccurate information, the unintended processing of personal data, intellectual property rights violations, and manipulation. These risks are emphasized as having major consequences for data security and societal impact.
  • It is emphasized that personal data within GenAI systems must be processed only for specific, explicit, and legitimate purposes, and that such processing must be relevant, limited, and proportionate to those purposes. In this context, vague statements such as "improving our database" are deemed insufficient.
  • The Guideline indicates that the distinction between data controllers and data processors in GenAI systems must be made based on the lifecycle of the system. Accordingly, the party that determines the purposes and means of personal data processing is considered the data controller, while the party carrying out the technical operations in line with these decisions is considered the data processor. It is also noted that roles and responsibilities may shift across different stages, such as the development, training, deployment, and use of GenAI systems. Further emphasis is placed on the need for a case-by-case assessment that considers the nature and context of each processing activity and the actual control mechanisms involved, regardless of the parties' contractual declarations.
  • The legal basis for personal data processing in GenAI processes must be determined separately for each processing step. In this regard, the condition of "necessity for the legitimate interests of the data controller" is noted as a commonly invoked basis in GenAI applications; this basis, however, does not grant unlimited authority. As emphasized in the Personal Data Protection Board's Decision dated 25 March 2019 and numbered 2019/78, legitimate interests must always be balanced against the fundamental rights and freedoms of the data subject. Accordingly, the collection and further processing of publicly available social media posts for the purpose of training a GenAI model cannot be justified on the grounds of legitimate interest where such processing exceeds the data subject's reasonable expectations. Such use fails the balancing test, as it creates a risk of reconstructing or re-profiling individuals based on their data.
  • The processing of special categories of personal data within GenAI systems may lead to outcomes that could cause harm or discrimination due to the sensitive nature of such data. In this context, the risks arising from the processing of this data must be carefully assessed and appropriate technical and administrative measures must be implemented to minimize such risks.
  • It is emphasized that cross-border data transfers hold particular importance in GenAI processes due to the large-scale data requirements involved, and such transfers must be carried out in compliance with Article 9 of the DP Law and the provisions of the Regulation on the Procedures and Principles Regarding the Transfer of Personal Data Abroad. The Guideline on the Transfer of Personal Data Abroad, published by the Authority, is also highlighted as a key reference document for ensuring compliance during these processes. (You may access our detailed article on the Guideline on Cross-Border Data Transfers here.)
  • Transparency is framed as a fundamental obligation in GenAI systems. In this context, privacy notices and policies must be clearly accessible within system interfaces; the categories of data processed during training, development, and user interaction phases must be explicitly disclosed; and data subjects must be able to clearly understand that they are interacting with an artificial intelligence system.
  • The data subject rights under Article 11 of the DP Law fully apply to GenAI systems. Consequently, technical and administrative mechanisms that provide transparency, traceability and accountability must be embedded from the outset. In particular, where automated decision-making is involved, systems must function in an explainable, auditable and human-intervenable manner to ensure that data subjects can effectively exercise their right to object.
  • Data security in GenAI systems is presented as a distinct and enhanced obligation that goes beyond conventional information security measures, requiring a proactive and lifecycle-oriented approach to address GenAI-specific threats such as model reverse engineering, prompt injection, and jailbreak attempts. In this regard, the implementation of privacy by design and privacy by default principles, conducting data protection impact assessments, the use of privacy-enhancing technologies, establishing comprehensive logging and traceability mechanisms, and dynamically managing associated risks are identified as essential components of secure GenAI operations.
  • In the everyday use of GenAI applications, users are advised to carefully assess the nature of the information they share and to avoid providing sensitive content, particularly identity, health, and financial data. In this context, reviewing privacy notices and any applicable policies, sharing only the information necessary for the intended purpose, effectively managing privacy settings, and refraining from disclosing information belonging to third parties are highlighted as key responsibilities.
  • Regarding children's interactions with GenAI systems, age-appropriate content controls, awareness of manipulative deepfake content, and parental guidance are deemed essential. Otherwise, it is noted that such technologies may have lasting negative effects on children's safety, privacy, and cognitive development.

You can access the full text of the Guideline through this link.

Footnote

1. "Application submitted to the Authority requesting the use of personal data processed for the fulfillment of the data controller's legal obligation within the scope of legitimate interest" – Summary of the Personal Data Protection Board Decision dated 25/03/2019 and numbered 2019/78 (https://www.kvkk.gov.tr/Icerik/5434/2019-78)

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
