Under the EU AI Act (the Act), transparency obligations around AI‑generated content are no longer a matter of high‑level principle, but of engineering, governance and evidentiary readiness. While the core idea - that users should know when they are interacting with AI‑generated or AI‑manipulated content - may appear, at first glance, deceptively simple, implementing these obligations in practice raises complex technical, organisational and legal questions for both AI providers and deployers.
The second draft of the Code of Practice on Transparency of AI‑Generated Content (the Code), published on 3 March 2026 and open for stakeholder feedback until 30 March 2026, represents a decisive attempt to bridge the gap between legal obligation and technical reality. Developed under the supervision of the European Commission through a broad multi‑stakeholder process, the Code aims to operationalise Articles 50(2) to (5) of the Act by providing practical and technical guidance for real‑world implementation.
Crucially for organisations, the Code is no longer exploratory. Compared to the first draft, it moves decisively away from high‑level principles and open questions towards prescriptive, technically detailed commitments, narrowing discretion and signalling how regulators are likely to assess compliance in practice.
Although formally voluntary and applicable only to its signatories, the Code is clearly designed to become the de facto compliance benchmark for both AI providers and deployers ahead of the transparency rules becoming applicable on 2 August 2026. As with other EU "voluntary" instruments, such as the General-Purpose AI Code of Practice (see our previous blogposts: Transparency requirements re training data and compliance with copyright law come into force in EU and Trade secrets in the AI era: Navigating transparency under the EU AI Act), adherence (or divergence) is likely to carry real evidentiary weight in regulatory investigations and litigation.
ALLOCATING TRANSPARENCY OBLIGATIONS ALONG THE AI VALUE CHAIN
The Code mirrors Article 50 of the AI Act by structuring transparency obligations along the AI value chain:
- Section 1 addresses providers of generative AI systems, focusing on machine‑readable marking and detection of AI‑generated or manipulated content (Article 50(2) and (5));
- Section 2 targets deployers, imposing clear and distinguishable labelling obligations for deepfakes and certain AI‑generated texts intended to inform the public (Article 50(4) and (5)).
This allocation reflects a core principle of the Act: transparency is a shared obligation. For organisations, this has direct implications for governance design, contractual allocation of responsibility and enforcement exposure.
KEY OBLIGATIONS FOR PROVIDERS: FROM AI-GENERATED OUTPUTS TO CONTENT PROVENANCE
Multi‑layered marking as the default standard
For providers, the Code makes clear that no single marking technique is sufficient to meet the requirements of Article 50 of the Act on its own.
Given the current state of marking and provenance technologies, providers are expected to rely on a multi‑layered marking strategy combining, where technically feasible:
- Digitally signed metadata indicating AI generation or manipulation;
- Imperceptible watermarking embedded directly into the content;
- Optional fingerprinting or logging mechanisms as a fallback, particularly for short or heavily transformed outputs.
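To make the first layer concrete, the sketch below attaches a digitally signed provenance manifest to a piece of generated content. This is an illustration only, not a mechanism prescribed by the Code: production systems typically use asymmetric signatures and standards such as C2PA Content Credentials, and every name here (the key, the manifest fields) is a hypothetical stand‑in.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration; real deployments would use an
# asymmetric key pair (e.g., C2PA-style manifests), not a shared secret.
SIGNING_KEY = b"provider-demo-key"

def build_signed_manifest(content: bytes, generator: str) -> dict:
    """Attach digitally signed metadata indicating AI generation to content."""
    manifest = {
        "ai_generated": True,
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    # Sign the canonicalised manifest so tampering with either the metadata
    # or the content (via its hash) invalidates the signature.
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest
```

Binding the signature to a hash of the content is what makes the marking verifiable downstream; watermarking and fingerprinting then cover cases where metadata is stripped in transit.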
This approach is presented as the most credible way to meet the four cumulative requirements set out in Article 50(2): effectiveness, reliability, robustness and interoperability.
While alternative approaches are not excluded in principle, the Code deliberately sets a high evidentiary bar. Providers opting out of the Code's baseline model will need to demonstrate, on the basis of independently verified benchmarks, that their solutions achieve at least an equivalent performance across all four criteria – a standard that, in practice, strongly incentivises alignment with the Code.
Preserving markings and enabling detection
Transparency does not stop at generation. Providers are also expected to preserve provenance information throughout the content lifecycle and to prevent deliberate removal or alteration of markings, including through contractual and policy‑based safeguards.
Marking alone is not enough. Providers must also make detection mechanisms available free of charge – via APIs, interfaces or public tools – enabling deployers, users and third parties to verify content provenance. Detection results must be clear, accessible (notably for persons with disabilities) and sufficiently informative, including confidence indicators where feasible.
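A detection endpoint of the kind the Code envisages would take content plus any accompanying metadata and return a verdict with a confidence indicator. The sketch below is a minimal, self-contained illustration under the assumption that provenance was recorded as an HMAC-signed manifest; the key, field names and confidence labels are all hypothetical, not the Code's prescribed format.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"provider-demo-key"  # hypothetical; real tools verify public-key signatures

def detect(content: bytes, manifest: dict) -> dict:
    """Verify provenance metadata and return a verdict with a confidence indicator."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    signature_valid = hmac.compare_digest(
        manifest.get("signature", ""),
        hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest(),
    )
    content_matches = claimed.get("content_sha256") == hashlib.sha256(content).hexdigest()
    if signature_valid and content_matches:
        return {"ai_generated": True, "confidence": "high", "method": "signed-metadata"}
    # Absence or invalidity of metadata is inconclusive, not proof of human origin;
    # real tools would fall back to watermark or fingerprint checks here.
    return {"ai_generated": None, "confidence": "low", "method": "none"}
```

Note the deliberately non-binary fallback: a missing or broken marking does not prove human authorship, which is why the Code asks for confidence indicators rather than yes/no answers.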
Taken together, these requirements turn transparency into an ongoing operational obligation, rather than a one‑off compliance exercise.
KEY OBLIGATIONS FOR DEPLOYERS: CLEAR LABELLING IN CONTEXT
Uniform principles, contextual execution
Deployers face a different, but equally operational, challenge: disclosing AI involvement in a clear and proportionate manner, without detracting from the fluidity, aesthetic quality, accessibility and user‑friendliness of the content.
The Code responds with detailed rules on design and placement of labels, icons and disclaimers, including:
- A uniform "AI" visual cue recognisable across the EU;
- Short explanatory text, where appropriate (e.g., "Generated with AI", "Manipulated with AI");
- Detailed accessibility and readability standards (e.g., minimum size and contrast requirements);
- Modality‑specific requirements for text, images, audio, video and live or real‑time content (e.g., repeated disclosure for long‑form audio or video, or audible signals for audio‑only formats).
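The modality-specific rules above can be pictured as a small lookup that a content pipeline consults before publication. The specific strings, cue and repetition interval below are illustrative assumptions, not values taken from the Code.

```python
# Hypothetical disclosure presets per modality; the Code's actual
# wording, cues and intervals may differ.
DISCLOSURES = {
    "image": {"cue": "AI", "text": "Generated with AI"},
    "text":  {"cue": "AI", "text": "Generated with AI"},
    # Audio-only formats need an audible signal rather than a visual cue,
    # repeated periodically for long-form content.
    "audio": {"cue": None, "text": "Generated with AI", "repeat_every_minutes": 10},
    "video": {"cue": "AI", "text": "Generated with AI", "repeat_every_minutes": 10},
}

def disclosure_for(modality: str, manipulated: bool = False) -> dict:
    """Return the disclosure spec for a modality, adjusting the wording
    for manipulated (as opposed to wholly generated) content."""
    spec = dict(DISCLOSURES[modality])
    if manipulated:
        spec["text"] = "Manipulated with AI"
    return spec
```

Encoding the rules as data rather than scattered conditionals makes it straightforward to update labels centrally when the final version of the Code fixes the exact wording and cues.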
The emphasis is not on theoretical transparency, but on first‑exposure disclosure: users must be informed at the moment they encounter the content, not through notices buried in terms and conditions or secondary interfaces.
Deepfakes versus AI‑generated text of public interest
The Code also clarifies and operationalises a key distinction already present in the AI Act:
- Deepfake image, audio or video content must be disclosed as artificially generated or manipulated, subject to narrow exceptions such as lawful use by authorities or proportionate treatment of artistic and fictional works.
- AI‑generated or AI‑manipulated text published to inform the public must be disclosed unless it has undergone genuine human review and a natural or legal person assumes editorial responsibility. Deployers relying on this editorial exception are expected to maintain documented procedures evidencing human oversight, raising the bar for informal or ad hoc review processes.
VOLUNTARY IN NAME, INFLUENTIAL IN PRACTICE
Despite its non‑binding status, the Code is likely to operate as a practical benchmark for compliance:
- Adherence is likely to be treated by courts and regulators as strong evidence of good faith and diligence;
- Deviations will need to be justified, documented and defensible;
- Cross‑border consistency pressures may quickly turn the Code into a market standard, not only within the EU but also for global actors active in the European market.
WHAT ORGANISATIONS SHOULD BE DOING NOW
With the August 2026 deadline approaching, organisations should already be moving from awareness to implementation:
- Providers should assess whether their current systems support multi‑layered marking and scalable detection mechanisms, and whether reliance on upstream or third‑party solutions is legally and operationally robust;
- Deployers should map their content creation workflows to identify where labelling obligations arise in practice;
- Both should embed transparency into broader governance frameworks, ensuring alignment with IP and trade secrets protection strategies.
OUTLOOK FOR THE FUTURE
A third and final version of the Code of Practice is expected by June 2026. The Commission has expressly invited further stakeholder input and is likely to further refine technical benchmarks, labelling standards, and enforcement expectations.
The direction of travel is nonetheless clear. Under the Act, transparency is treated as foundational infrastructure. Design choices, interfaces and governance processes carry direct regulatory relevance and increasing importance in IP enforcement and litigation contexts.
For AI stakeholders, the focus is therefore on how to strategically embed and document transparency in a way that withstands regulatory scrutiny and litigation, and how to align those efforts with broader IP governance frameworks and litigation readiness.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.