Artificial intelligence
AI-generated content poses an emerging challenge for users and creators, as it blurs the line between what is original and what is AI-generated. Declaring original creative works and attributing them to a human creator can help creators stand out and distinguish their work from AI-generated content.
AI Training
Current European legislation (→ Article 4, EU DSM Directive on Copyright) allows AI providers to use copyright-protected works for the purpose of training AI models/LLMs, unless the rightsholder has reserved that use. Binding a machine-readable opt-out declaration to a work can prevent its content from being used as AI training data. Ultimately, the opt-out is intended to facilitate a functional licensing market (opt-in) for works.
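As a minimal sketch of what such a machine-readable opt-out check could look like, the snippet below probes a work's URL for a TDM-Reservation-Protocol-style response header (the header name "tdm-reservation" and the example URL are assumptions for illustration, not a prescribed mechanism):

```python
# Sketch: checking a machine-readable opt-out before using a work for AI training.
# Assumes a TDM-Reservation-Protocol-style HTTP header, where "tdm-reservation: 1"
# signals that text-and-data-mining (and thus AI training) rights are reserved.
import urllib.request

def is_opted_out(url: str) -> bool:
    """Return True if the work at `url` declares a machine-readable opt-out."""
    request = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(request) as response:
        return response.headers.get("tdm-reservation", "0").strip() == "1"

if __name__ == "__main__":
    url = "https://example.com/artwork.jpg"  # hypothetical asset URL
    if is_opted_out(url):
        print("Opt-out declared: do not ingest this work for AI training.")
    else:
        print("No opt-out found: licence terms may still need to be checked.")
```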
Transparency obligations
Input (Upstream)
According to the EU AI Act, providers of AI systems and creators of synthetic media are required to provide a "sufficiently detailed summary" of all copyrighted works utilised for the training of their models (Article 53(1)(d)). To comply with this regulatory requirement, providers of AI systems need to identify, list and transparently declare all assets before the content is ingested into their systems, and ensure that no opt-out declaration prohibits their use.
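One way to organise this upstream bookkeeping is sketched below: every candidate asset is recorded with its rights information, opted-out works are excluded, and the remaining entries form a machine-readable manifest from which a training-content summary could later be compiled. The record fields and the `opted_out` flag are illustrative assumptions, not a format mandated by the AI Act:

```python
# Illustrative sketch: log every asset considered for training, drop opted-out
# works, and serialise the rest as a machine-readable ingestion manifest.
import json
from dataclasses import dataclass, asdict

@dataclass
class TrainingAsset:
    identifier: str    # e.g. URL, ISBN or content hash of the work
    rightsholder: str  # declared creator or rights owner, if known
    licence: str       # licence or legal basis relied on for ingestion
    opted_out: bool    # result of the machine-readable opt-out check

def build_training_manifest(assets: list[TrainingAsset]) -> str:
    """Keep only works without an opt-out and serialise the manifest as JSON."""
    eligible = [asdict(a) for a in assets if not a.opted_out]
    return json.dumps({"ingested_assets": eligible}, indent=2)

assets = [
    TrainingAsset("https://example.com/novel.txt", "Jane Doe", "CC BY 4.0", False),
    TrainingAsset("https://example.com/photo.jpg", "John Roe", "all rights reserved", True),
]
print(build_training_manifest(assets))  # only the non-opted-out work remains
```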
Output (Downstream)
Article 50(2) requires that "providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state-of-the-art, as may be reflected in relevant technical standards."
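In practice, such machine-readable marking would typically rely on an established standard such as a C2PA manifest or IPTC metadata; the sketch below only illustrates the underlying idea with a simple JSON sidecar whose file name and field names are assumptions made for this example:

```python
# Minimal sketch of a machine-readable "AI-generated" marker: a JSON sidecar
# bound to the output file by its SHA-256 hash. Field names are illustrative.
import hashlib
import json
from datetime import datetime, timezone

def write_ai_generation_label(output_path: str, model_name: str) -> str:
    """Write a JSON sidecar declaring that `output_path` was AI-generated."""
    with open(output_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    label = {
        "asset_sha256": digest,   # binds the label to the exact output file
        "generator": model_name,  # which model produced the content
        "ai_generated": True,     # the machine-readable disclosure itself
        "created": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = output_path + ".provenance.json"
    with open(sidecar, "w", encoding="utf-8") as f:
        json.dump(label, f, indent=2)
    return sidecar
```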
Preventing model collapse
Providers of AI systems must identify and publicly declare AI-generated content not only for transparency reasons, but also in their own self-interest. AI system providers require high-quality, human-generated content and must refrain from training their models on synthetic media. As a result, transparency regarding AI-generated content is not merely a legal obligation but also a critical necessity to avoid the degradation caused by recursively training on synthetic outputs, commonly referred to as model collapse.
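The self-interest argument can be expressed as a simple filtering step: before retraining, any candidate item whose provenance record flags it as AI-generated is excluded, so the model keeps learning from human-created works. The record structure mirrors the illustrative sidecar above and is an assumption, not a standard:

```python
# Sketch: exclude items declared as AI-generated from a retraining corpus.
def filter_human_created(candidates: list[dict]) -> list[dict]:
    """Keep only items not declared as AI-generated."""
    return [item for item in candidates if not item.get("ai_generated", False)]

corpus = [
    {"id": "doc-1", "ai_generated": False},  # human-created: kept
    {"id": "doc-2", "ai_generated": True},   # synthetic: excluded to avoid model collapse
]
print(filter_human_created(corpus))  # -> [{'id': 'doc-1', 'ai_generated': False}]
```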