Artificial intelligence

AI-generated content poses an emerging challenge and threat to creators and rightsholders, as it blurs the line between original and AI-generated work. Because machine-generated content is not protected by copyright under current legal regimes, clarifying ownership and attribution is an essential topic in the development of the technology.

The unauthorised use of copyrighted works in training data is problematic in itself. It may produce unlawful replications of existing works, which can lead to accusations of infringement or plagiarism. For example, an AI model trained on a dataset of existing paintings may generate a new painting that, while an edited version, closely resembles a pre-existing work, raising questions about originality and ownership.

Declaring original creative works and attributing them to a human creator could help to distinguish them from AI-generated content.
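Such a declaration could be as simple as a machine-readable manifest attached to the work. The sketch below illustrates the idea in Python; the field names (`generator`, `declared_at`, etc.) are illustrative assumptions, not drawn from any particular standard.

```python
import json
from datetime import datetime, timezone

def declare_human_work(title: str, creator: str, created: str) -> str:
    """Build a minimal, machine-readable declaration attributing a work
    to a human creator. Field names are illustrative, not a standard."""
    manifest = {
        "title": title,
        "creator": creator,
        "created": created,
        "generator": "human",  # explicit: not AI-generated
        "declared_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(manifest, indent=2)

print(declare_human_work("Sunrise Study", "Jane Doe", "2023-05-01"))
```

In practice, a real-world declaration would be cryptographically signed and bound to the asset itself rather than stored alongside it, so that the attribution survives copying and redistribution.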

Transparency obligations

In response to demands from creative communities worldwide, and in alignment with the upcoming EU AI Act, providers of AI systems and creators of synthetic media may soon be required to provide detailed lists of all copyrighted works used to train their models.

To comply with this regulatory requirement, providers of AI systems need to identify, list, and transparently declare all assets before the content is ingested into their systems, and ensure that no opt-out policy prohibits their use.
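The pre-ingestion check described above can be sketched as a simple filtering step. This is a minimal illustration in Python, assuming opted-out works can be looked up by an asset identifier; real pipelines would consult registries or embedded opt-out metadata instead.

```python
def filter_ingestable(assets, opted_out):
    """Partition candidate training assets into those cleared for
    ingestion and those blocked by an opt-out declaration.
    'opted_out' is assumed to be a set of asset identifiers."""
    cleared, blocked = [], []
    for asset_id in assets:
        (blocked if asset_id in opted_out else cleared).append(asset_id)
    return cleared, blocked

cleared, blocked = filter_ingestable(
    ["img-001", "img-002", "img-003"],
    opted_out={"img-002"},
)
# cleared == ["img-001", "img-003"], blocked == ["img-002"]
```

Keeping the blocked list, rather than silently dropping those assets, also produces the audit trail needed to demonstrate compliance.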

Preventing model collapse

AI system providers require high-quality, human-generated content and must refrain from training their models on synthetic media. As a result, transparency regarding AI-generated content is not merely a legal obligation but also a critical necessity to avoid the degradation commonly referred to as model collapse.

Providers of AI systems must therefore identify and publicly declare AI-generated content not only for transparency reasons, but also in their own self-interest.