11 March 2026

Evaluation Of The Use Of Generative Artificial Intelligence Tools In Workplaces Within The Scope Of The Law On The Protection Of Personal Data

Sakar Law Office


The Personal Data Protection Board ("Board") published the document titled "Use of Generative Artificial Intelligence Tools in Workplaces" ("Document") on its official website on 5 March 2026. The Document aims to provide a general framework regarding the use of generative artificial intelligence tools offered by third parties and publicly accessible in workplaces. In this article, the recent development of generative artificial intelligence systems and their relationship with the processing of personal data are examined.

As is widely known, generative artificial intelligence systems have become one of the most significant technological developments of recent years. These technologies are increasingly used in business processes and are preferred by employees for the convenience they provide in accessing information and producing content. Generative AI tools enable the creation of various types of content, including text, images, audio, and software code. Through the Document, the Board aims to raise awareness among companies, institutions, and organizations by providing a general overview of the use of generative AI tools in workplaces, highlighting the risks associated with uncontrolled use, and outlining the key considerations that should be taken into account during their use, thereby encouraging the responsible and conscious use of such technologies.

  • Overview of the Use of Generative Artificial Intelligence Tools in Workplaces

Generative artificial intelligence ("GenAI") is explicitly defined in the Document as artificial intelligence ("AI") systems that are trained on large-scale datasets and are capable of generating content in different formats such as text, images, videos, audio, or software code in response to prompts or commands provided by users.

The widespread adoption and increasing accessibility of GenAI tools significantly facilitate employees' use of these tools within business processes. Indeed, generative AI tools are not limited to a particular sector or professional group; rather, they can be used as supportive tools in activities carried out across various fields of expertise. Today, these tools can be utilized for a wide range of purposes, including drafting emails and texts, summarizing documents, analyzing content, supporting idea development processes, generating meeting notes, and accelerating research activities. The speed and efficiency advantages offered by generative AI tools create a strong incentive for many organizations to integrate these tools into different stages of their business processes.

  • Shadow Artificial Intelligence and Shadow IT

One of the concepts highlighted in the Document published by the Board is Shadow Artificial Intelligence (Shadow AI). Shadow AI refers to the use of generative artificial intelligence tools by employees within business processes without the knowledge, approval, or institutional control mechanisms of the organization. This form of use, which often arises through individual initiative, may lead to applications that cannot be sufficiently monitored by the organization's information technology infrastructure and governance mechanisms. Today, the use of shadow AI has moved beyond being merely a theoretical risk for many organizations and has become a phenomenon encountered within everyday business workflows. For instance, employees sharing meeting notes, internal correspondence, draft reports, or various types of information relating to customers and employees with generative AI tools may increase the risk of data disclosure outside the organization's control. This phenomenon also resembles Shadow IT practices, which organizations have long faced. Shadow IT refers to situations where employees use digital tools that have not been approved or monitored by the organization for business purposes. Similarly, the uncontrolled use of generative AI tools creates a new area that may complicate organizations' risk management processes.

  • Risks Associated with the Use of Shadow Artificial Intelligence

The Document also draws attention to the risks that may arise from the use of shadow artificial intelligence.

  • Auditability: The use of generative artificial intelligence tools that are not subject to institutional recording and audit mechanisms may make it difficult to subsequently determine which data were used for which purposes and on what basis the resulting outputs were generated.
  • Quality and Accuracy of Decisions: Artificial intelligence outputs that do not pass through institutional verification processes may produce inaccurate, misleading, or inconsistent results, which may in turn lead to incorrect assessments within business processes.
  • Protection of Intellectual Property and Trade Secrets: The sharing of source codes, product designs, business strategies, or competitively sensitive information with external generative artificial intelligence tools may create the risk of losing control over such information or making it accessible to third parties.
  • Loss of Reputation and Trust: The use of artificial intelligence outputs whose accuracy has not been verified may undermine the credibility of the organization in the eyes of its stakeholders through the dissemination of incorrect or unreliable content.
  • Information Security and Cybersecurity: The use of generative artificial intelligence tools outside institutional control may expose organizations to cyberattacks through insecure APIs, personal devices, or uncontrolled integrations.
  • Protection of Personal Data: Significant risks also arise in terms of the protection of personal data. The sharing of personal data with generative artificial intelligence tools used outside institutional control may increase the risk of data breaches. Such practices may lead to the unlawful processing of personal data, unauthorized access by third parties, or the use of such data for purposes other than those for which they were collected. Furthermore, the reflection of personal data or institutionally sensitive information shared by users through prompts in the generated outputs, and the potential accessibility of such outputs by third parties, is also considered a major security concern.

  • Considerations for the Use of Generative Artificial Intelligence Tools

In this context, the increasingly widespread use of generative artificial intelligence tools in business processes necessitates that organizations review their institutional policies and practices regarding these tools. Indeed, it is considered difficult for policies based solely on prohibitive approaches to produce effective results in practice. For this reason, it is important for organizations to establish a clear and comprehensive corporate policy governing the use of generative artificial intelligence tools.

Policies and Guidelines: In practice, the policies and guidelines adopted by some organizations allow the use of publicly available generative artificial intelligence tools for certain purposes, provided that no personal data, trade secrets, or institutionally sensitive information are included. For example, certain uses may be accepted within defined limits, such as supporting idea development processes, reviewing texts from a linguistic perspective, or summarizing content available on the internet. On the other hand, the sharing of sensitive information—such as customer files, human resources data, or internal correspondence—with generative artificial intelligence tools is generally among the types of use that are prohibited under such policies.

Sensitivity of Personal Data: When using generative artificial intelligence tools, employees should adopt a particularly cautious approach with respect to personal data and institutionally sensitive information. Where sensitive data such as health data, financial information, or information relating to legal proceedings are involved, even greater caution is required.

Nature of the Data Shared with Generative Artificial Intelligence Tools: The Document also addresses the issues that should be taken into consideration regarding the nature of the data shared during interactions with generative artificial intelligence tools. Accordingly, since personal data may be processed through the prompts provided by users as inputs to generative artificial intelligence tools, it is important to observe the obligations related to the protection of personal data during these processes. In this context, sharing information that can directly or indirectly identify a person through such tools may give rise to various risks. For this reason, the Document states that, during interactions with generative artificial intelligence tools, preferring anonymized, generalized, and abstract expressions wherever possible would constitute a more prudent approach in terms of personal data protection. For example, the use of more general expressions instead of distinctive elements such as specific names of individuals, dates, or locations is recommended.
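As a minimal illustration of the generalization approach recommended above, the substitution of identifying elements with neutral placeholders before a prompt is sent to an external tool could be sketched as follows. This is a simplified, hypothetical example: the patterns, placeholder labels, and name list are illustrative assumptions and are not drawn from the Document.

```python
import re

def generalize_prompt(prompt: str) -> str:
    """Replace common identifying elements in a prompt with general
    placeholders before it is shared with an external GenAI tool."""
    # Dates in DD/MM/YYYY or DD.MM.YYYY form -> generic placeholder
    prompt = re.sub(r"\b\d{1,2}[./]\d{1,2}[./]\d{4}\b", "[DATE]", prompt)
    # Email addresses -> generic placeholder
    prompt = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", prompt)
    # Names drawn from an internal list -> role-based placeholder
    for name in ("Ayse Yilmaz", "Mehmet Demir"):
        prompt = prompt.replace(name, "[EMPLOYEE]")
    return prompt

print(generalize_prompt(
    "Summarize the complaint Ayse Yilmaz filed on 12.03.2025 "
    "and reply to ayse.yilmaz@example.com"
))
```

A pattern-based filter of this kind is only a first line of defence; it cannot reliably catch every identifying detail, which is why the Document's emphasis on employee awareness and human judgment remains essential.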

Critical and Inquisitive Approach: The Document also indicates that excessive reliance on outputs generated by generative artificial intelligence tools constitutes a significant risk. This situation, referred to in the literature as automation bias, may lead users to accept the results produced by automated systems as accurate without subjecting them to adequate evaluation. For this reason, it is important that AI-generated outputs be reviewed through human oversight and critical assessment processes.

Technical and Administrative Measures: Finally, in order to ensure the secure use of generative artificial intelligence tools, technical and administrative measures related to data security and access control should be evaluated. Approaches that enable employees to access only the tools designated by the organization and whose conditions of use are clearly defined will contribute to reducing uncontrolled forms of use. In addition, raising awareness among employees and conducting training activities on the secure use of such tools are also considered important elements.
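One way the access-control measure described above might be operationalized, for example at a corporate proxy or gateway, is a simple allowlist restricting traffic to the tools the organization has designated. The following is a hypothetical sketch; the host names are placeholders and do not come from the Document.

```python
# Hypothetical allowlist of GenAI services designated by the organization.
APPROVED_GENAI_HOSTS = {
    "genai.internal.example.com",   # internally hosted model
    "approved-vendor.example.com",  # contractually vetted provider
}

def is_request_allowed(host: str) -> bool:
    """Return True only for GenAI hosts the organization has designated."""
    return host.lower() in APPROVED_GENAI_HOSTS

# A request to an unapproved public tool would be blocked, and the attempt
# could be logged to support the auditability concerns noted earlier.
print(is_request_allowed("genai.internal.example.com"))  # True
print(is_request_allowed("public-chatbot.example.org"))  # False
```

Such a control is most effective when paired with the training and awareness activities mentioned above, since purely technical blocking may otherwise push employees toward unmonitored personal devices.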

  • Conclusion

Generative artificial intelligence tools bring along various risks in many areas, including data security, protection of personal data, intellectual property rights, accuracy of decisions, and institutional reputation. Nevertheless, considering the speed and efficiency advantages offered by these technologies, completely eliminating or prohibiting their use in different aspects of business and private life is not regarded as a realistic approach in practice. Indeed, completely banning such tools may lead employees to access these technologies through alternative means and outside institutional control, thereby increasing the risk of shadow artificial intelligence usage. For this reason, it is important for organizations to adopt a governance framework that prioritizes the controlled and responsible use of generative artificial intelligence tools rather than a purely prohibitive approach. In this context, it is of great importance for companies to establish clear institutional policies, conduct awareness and training activities for employees, provide the necessary information regarding the protection of personal data, evaluate AI-generated outputs through a critical approach, and implement technical and administrative measures aimed at ensuring data security.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

