The European Commission has published the first draft of the Code of Practice on Transparency of AI-generated Content. The Code provides guidance on demonstrating compliance with the disclosure obligations under Article 50 of the EU AI Act, while recognizing that adherence to the Code does not constitute conclusive evidence of compliance with those obligations. The Code aims to balance legal obligations with technical feasibility, acknowledging that no single technique currently meets all requirements for effectiveness, robustness, and interoperability. The Article 50 obligations are set to become applicable in August 2026.
The Code applies to providers of AI systems that generate synthetic audio, video, image, or text content (including general-purpose AI) and to deployers of such AI systems, where the AI system is either placed on the EU market or put into service in the EU, which triggers the applicability of Article 50 of the AI Act. The Code signals a shift from principle-based transparency to specific technical requirements involving multi-layered watermarking, metadata standards, and user-facing icons.
Who’s Affected?
The Code targets two groups: providers of generative AI systems (to the extent the AI system is placed on the EU market), who must implement technical marking and offer free detection tools; and deployers using AI to create content published in the EU, who must ensure proper labelling. The Code explicitly emphasizes proportionality: startups and SMEs will be held to measures appropriate to their size and resources.
Despite certain exemptions for open-weight models in the AI Act, the draft Code suggests that such models should implement structural marking techniques encoded in the weights during training to facilitate downstream compliance. This is intended to help third parties that build generative AI systems on top of these open-weight models or systems comply with the requirements.
What Type of Content is Considered “Generated by AI”?
Clear labelling is mandated for AI-generated or manipulated image, audio, or video content that constitutes a deepfake.
The key requirement is meant to ensure that natural persons can recognize whether they are interacting with an AI system or are being confronted with AI-generated or AI-manipulated content. At the same time, the Code recognizes that transparency requirements are context-dependent and should not apply with the same intensity in every case.
The Code proposes to establish a taxonomy for determining what constitutes “deepfake” content, indicating it may include “fully AI-generated” content (autonomously generated) and “AI-assisted” content (human-authored but AI-modified). The latter category is defined non-exhaustively to include actions such as “face/voice replacement or modification” and “seemingly small AI-alterations,” such as “colour adjustments that change contextual meaning (e.g. skin tone)”.
The Code contains special provisions for AI-generated or manipulated texts on matters of public interest. Deployers must disclose such texts as a matter of principle, unless they have been subject to human review or editorial control and a natural or legal person bears editorial responsibility (the “Editorial Exemption”). Deployers must keep internal documentation of their labelling practices and, when relying on the Editorial Exemption, retain specific logs identifying the human reviewer and the date of approval. The recitals of the Code make clear that the Editorial Exemption requires substantive human review, not minor tweaks.
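For illustration only, the sketch below shows what such an internal Editorial Exemption log entry might look like. The Code does not prescribe any particular schema; the record structure and field names here are our own assumptions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class EditorialExemptionLog:
    """One internal record per published text relying on the Editorial Exemption."""
    content_id: str          # internal reference to the published text
    reviewer_name: str       # natural person who carried out the human review
    review_date: date        # date the reviewer approved the text for publication
    responsible_entity: str  # natural or legal person bearing editorial responsibility
    summary_of_review: str   # evidence the review was substantive, not a minor tweak

record = EditorialExemptionLog(
    content_id="article-2026-0142",
    reviewer_name="Jane Doe",
    review_date=date(2026, 8, 15),
    responsible_entity="Example Media GmbH",
    summary_of_review="Restructured argument; verified quoted figures against primary sources.",
)
```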
Key Provisions – Providers:
The Code adopts a “multilayered” approach to marking in a machine-readable format, in which the different layers reference one another (for example, a watermark may refer to a metadata identifier, or vice versa). The goal is to ensure the marking cannot be removed or manipulated. The Code describes several complementary techniques for marking outputs: metadata identifiers; logging facilities or fingerprinting that allow providers to verify outputs and identify content even if marks are removed or degraded; interwoven watermarking that embeds hidden data directly into the content (e.g., pixel-level modifications) and can resist typical processing such as compression or cropping; and digital signatures within the content. The draft Code does not, however, endorse a specific standard. It further clarifies that such techniques can be supplied by third-party service providers or developed internally by the provider.
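As a rough illustration of the layered idea, the sketch below pairs a stored fingerprint with a metadata identifier that references it, assuming a simple in-house scheme. Real implementations would rely on established provenance standards and robust invisible watermarking; the structure and field names here are assumptions, not anything the Code mandates.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def mark_output(content: bytes, model_id: str) -> dict:
    # Layer 1: a fingerprint the provider keeps in its own log, so the content can be
    # re-identified even if embedded marks are later stripped or degraded.
    fingerprint = hashlib.sha256(content).hexdigest()
    # Layer 2: a metadata identifier attached to the output, referencing layer 1.
    return {
        "provenance_id": str(uuid.uuid4()),
        "generator": model_id,
        "ai_generated": True,
        "created": datetime.now(timezone.utc).isoformat(),
        "fingerprint": fingerprint,
    }

if __name__ == "__main__":
    metadata = mark_output(b"...synthetic image bytes...", model_id="example-image-model-v1")
    print(json.dumps(metadata, indent=2))
```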
Providers of generative AI systems must offer an integrated option in their system’s interface, enabled by default, that allows deployers to include a perceptible mark or label in the content directly upon generation of the output.
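A minimal sketch of what such an opt-out (rather than opt-in) setting could look like in a provider’s generation interface is shown below. The function and parameter names are hypothetical; the Code only requires that the option exists and is enabled by default.

```python
def generate_image(prompt: str, visible_ai_label: bool = True) -> dict:
    """Generate content; a perceptible label is applied unless the deployer opts out."""
    output = {"prompt": prompt, "image": b"<generated bytes>"}
    if visible_ai_label:
        output["overlay_text"] = "AI-generated"  # perceptible mark rendered into the output
    return output

result = generate_image("a photorealistic street scene")  # label included by default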
The Code also suggests that providers should make “detectors” available to users and third parties (e.g., via an API or a user interface). The draft Code further suggests that detectors implemented by providers of GPAI models that can be integrated into downstream services should not be limited to detecting embedded watermarks but should also detect unmarked synthetic content.
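Purely for illustration, a downstream integration of such a detector over an API might look like the sketch below. The endpoint URL, request shape, and response fields are assumptions; the draft Code does not specify an interface.

```python
import json
from urllib import request

def detect_synthetic(content_url: str,
                     api_url: str = "https://provider.example/v1/detect") -> dict:
    """Ask a provider-hosted detector whether the referenced content is AI-generated."""
    payload = json.dumps({"content_url": content_url}).encode()
    req = request.Request(api_url, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        # e.g. {"watermark_found": false, "synthetic_probability": 0.91}
        return json.load(resp)
```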
Providers must implement appropriate measures to preserve marks and other intrinsic provenance signals in AI-generated or manipulated content, technically ensuring that existing detectable marks are retained and not altered or removed, including where such content is used as input and subsequently transformed by their AI system into a new output. In addition, providers must contractually prohibit the removal of or tampering with marks by deployers of the provider’s generative AI system or model, or by any other third parties. Revising contracts and acceptable use policies should therefore be considered as an action item as well.
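The sketch below illustrates one way provenance could be carried forward when marked content is used as input and transformed into a new output: the existing metadata layer is copied rather than dropped. It reuses the hypothetical record from the marking sketch above; none of this is prescribed by the Code.

```python
def transform_with_provenance(input_content: bytes, input_metadata: dict, edit: str) -> tuple:
    """Apply an AI transformation while retaining the input's provenance signals."""
    new_content = input_content + edit.encode()  # stand-in for the actual AI transformation
    new_metadata = dict(input_metadata)          # carry existing marks/metadata forward
    new_metadata["derived_from"] = list(input_metadata.get("derived_from", [])) + [
        input_metadata.get("provenance_id")
    ]
    new_metadata["ai_manipulated"] = True
    return new_content, new_metadata
```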
Key Provisions – Deployers:
Deployers should maintain internal compliance documentation, train employees, and establish mechanisms for reporting and correcting incorrect or omitted labels. The Code also emphasizes accessibility: disclosures must be perceptible to people with disabilities, for example through alternative text descriptions, audio cues, or sufficient visual contrast.
The Code outlines different labelling placement rules for different types of content. In general, a labelling icon must be clear and distinguishable at the “first exposure”. For real-time video, it must be displayed persistently “where feasible”; for audio, there are requirements for audible disclaimers.
The Code offers flexibility for artistic or satirical works, allowing for “non-intrusive” placement that does not hamper the enjoyment of the work.
Beyond technical marking, the Code proposes a unified visual language for European audiences: a standardized “AI” icon to be displayed whenever content is generated or meaningfully edited by AI. Deepfake videos will require the icon to be displayed continuously, and text on matters of public interest must be labelled unless it has undergone full human editorial review.
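For internal compliance checklists, the placement expectations described above could be summarized roughly as in the sketch below. The category names and rule texts are our paraphrase of the draft, not normative wording from the Code.

```python
LABEL_RULES = {
    "image": "Standardized 'AI' icon, clear and distinguishable at first exposure.",
    "video_deepfake": "Icon displayed continuously; persistent where feasible for real-time video.",
    "audio": "Audible disclaimer in addition to machine-readable marking.",
    "text_public_interest": "Label required unless full human editorial review (Editorial Exemption).",
    "artistic_or_satirical": "Non-intrusive placement that does not hamper enjoyment of the work.",
}

def labelling_rule(content_type: str) -> str:
    """Look up the expected labelling approach for a given content category."""
    return LABEL_RULES.get(content_type, "Assess against Article 50 and the Code's taxonomy.")
```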
Full draft: First Draft Code of Practice
Want to understand how this affects your operations? We’d be happy to discuss.