Brussels, BELGIUM – The fourth European AI Roundtable brought together leading experts in Brussels yesterday to discuss the critical transparency requirements of the EU AI Act’s Article 50, which become mandatory in August 2026.
The timely discussion, hosted by the Computer & Communications Industry Association (CCIA Europe), came shortly after the European Commission began drafting the Code of Practice defining compliance criteria for Article 50. From next year, AI-generated content must be marked and detectable, and ‘deepfakes’ must be clearly labelled as such.
Experts spoke about how to ensure the Code meets the AI Act’s goals, without causing ‘labelling fatigue’ among Europeans or locking in quickly outdated technical requirements.
The timing is tight: the drafting process, launched last month, is expected to take 10 months, leaving at best two months before the obligations come into effect. Delays in the previous Code created legal uncertainty, with the Commission missing its own deadlines.
The event also marked the launch of a new study, ‘Transparency Obligations for All AI Systems: Article 50 of the AI Act’, by Professor Joan Barata. The study argues the Code should focus only on systems posing real risks of impersonation or deception, and should exempt assistive tools – like spell checkers, image resizing and cropping, or common audio edits – from labelling obligations in order to preserve public trust and the Act’s effectiveness.
Another concern raised during the Roundtable was the technical trade-off posed by the AI Act’s requirement that watermarks be both robust – that is, hard to remove – and easy to detect across different systems. No single technical solution currently meets both requirements, experts pointed out.
CCIA Europe urges the AI Office and those involved in drafting the Code of Practice to take these warnings seriously. The Code must ensure the AI Act’s transparency goals are met without causing information overload, imposing overly prescriptive technical rules, or reducing usability.
The following can be attributed to CCIA Europe’s Senior Policy Manager, Boniface de Champris:
“Correct implementation of the AI Act’s Article 50 is crucial to avoid counterproductive outcomes for European users. Excessive labelling can cause ‘banner blindness’ with endless notifications. If we have to label everything, from simple spell-checked emails to photos with a filter, the labelling of AI content will lose all meaning.”
“At the same time, technical requirements must remain flexible and outcome-focused to keep pace with technology. That’s why the Code of Practice must avoid prescribing rigid technology-specific solutions that risk becoming obsolete quickly.”
Notes for editors
Article 50 is the transparency section of the EU AI Act. It sets out rules to clarify when and how AI is being used. AI systems that interact with users, such as chatbots, must disclose that they are automated unless this is obvious. Content produced by generative AI – whether audio, images, video, or text – must carry machine-readable watermarks, and AI-generated deepfakes must be clearly labelled. Finally, AI applications analysing human emotions, such as facial expressions in the workplace or shops, must notify users in advance.