Can Europe watermark AI? What the AI Act really asks for

Europe's AI Act requires providers of general-purpose AI to mark AI-generated content and make it detectable, and it demands that those marks be reliable, interoperable, effective, and robust. But what does that mean in practice?

A new paper by Thomas Souverain maps today's watermarking tech for large language models (LLMs) to the law's demands.

  • Clear taxonomy: Classifies watermarking schemes by when they intervene: before, during, or after training, and, at generation time, on the next-token distribution versus the sampling step (see the sketch after this list).
  • Evaluation guide: Translates the Act's criteria into tests for robustness, detectability, and model quality; proposes three dimensions to assess interoperability.
  • Reality check: No current method meets all four standards; the most promising direction is watermarking embedded deeper in the model's architecture.
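
For intuition, here is a minimal sketch (not from the paper) of one well-known generation-time approach: a "green-list" bias on the next-token distribution in the style of Kirchenbauer et al., paired with the matching z-score detection test. The hashing scheme, gamma, and delta values are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of a "green-list" watermark on the next-token
# distribution, plus its detection test. Illustrative assumptions only.
import hashlib
import numpy as np

VOCAB_SIZE = 50_000
GAMMA = 0.5    # fraction of the vocabulary placed on the green list
DELTA = 2.0    # logit bias added to green-listed tokens

def green_list(prev_token: int) -> np.ndarray:
    # Seed a PRNG from the previous token so the partition is
    # reproducible at detection time without access to the model.
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % 2**32
    rng = np.random.default_rng(seed)
    return rng.permutation(VOCAB_SIZE)[: int(GAMMA * VOCAB_SIZE)]

def watermark_logits(logits: np.ndarray, prev_token: int) -> np.ndarray:
    # Intervene on the next-token distribution: nudge green tokens up.
    biased = logits.copy()
    biased[green_list(prev_token)] += DELTA
    return biased

def detect(tokens: list[int]) -> float:
    # z-score of how often tokens fall in their green list; a large
    # positive value suggests watermarked text.
    hits = sum(t in set(green_list(prev)) for prev, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - GAMMA * n) / np.sqrt(n * GAMMA * (1 - GAMMA))
```

On a few hundred watermarked tokens the z-score should sit far above a typical threshold, while unwatermarked text stays near zero; the Act's robustness question is whether that gap survives paraphrasing, translation, and editing.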

Why it matters: Policymakers, developers, and auditors get a shared language and test plan to judge watermarking claims under the EU AI Act.

Paper: http://arxiv.org/abs/2511.03641v1

Register: https://www.AiFeta.com

#AIAct #Watermarking #LLM #AI #TrustworthyAI #EU #Safety
