Who’s Liable When GenAI Produces Illegal CSAM? A German Law Perspective
New research examines how German criminal law could apply when generative AI is used to create child sexual abuse material (CSAM). The key takeaway: legal risk is not limited to the end user; it can also reach model developers, researchers, and company representatives.
The authors combine legal analysis with illustrative scenarios to show that liability may turn on:
- The type of image generated (e.g., realism and whether real persons are involved)
- The system’s intended purpose and design choices
- The strength of content moderation and how misuse reports are handled
- What providers knew, could foresee, and controlled
Why it matters: in Germany, responsibility for preventing CSAM generation can extend across the entire AI lifecycle. The paper urges builders and providers to implement robust safeguards, document intent and policies, and respond decisively to reports of misuse.
This post is informational and not legal advice.
Paper: https://arxiv.org/abs/2601.03788v1
Register: https://www.AiFeta.com
#AI #Law #Safety #ChildProtection #Germany #GenAI #Policy #TrustAndSafety #Ethics