When AI Makers Could Be Criminally Liable in Germany for CSAM Outputs
Can creators of generative AI face criminal charges if users make child sexual abuse material (CSAM) with their models? A new multidisciplinary study of German law says: sometimes, yes.
The authors examine realistic scenarios and find that liability may extend beyond the user to independent developers, researchers, and company representatives. Whether it does depends on factors such as:
- What’s generated: how lifelike the image is and whether minors are depicted.
- Intended use: the model's stated purpose, its training data, and known misuse risks.
- Safeguards: content filters, monitoring, and response to abuse reports.
- Distribution: whether outputs are shared publicly or generation is enabled at scale.
Key takeaway: building GenAI without robust safety controls can create criminal exposure under German law, not just PR risk. The paper spells out implications for each role and urges clear policies, technical guardrails, documentation, and rapid takedown processes.
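To make the "technical guardrails" point concrete, here is a minimal, hypothetical sketch of a pre-generation moderation gate with an audit trail. Everything in it (the `moderate` function, the `BLOCKED_TERMS` deny-list, the `ModerationRecord` type) is illustrative and assumed, not taken from the paper; a real system would use trained classifiers rather than a keyword list, and would screen outputs as well as prompts.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Placeholder deny-list standing in for a real trained safety classifier.
BLOCKED_TERMS = {"example_blocked_term"}

@dataclass
class ModerationRecord:
    """One logged moderation decision; retained records support
    the documentation and takedown duties the paper highlights."""
    timestamp: str
    prompt: str
    allowed: bool
    reason: str

def moderate(prompt: str, audit_log: list) -> bool:
    """Return True if generation may proceed; log the decision either way."""
    flagged = any(term in prompt.lower() for term in BLOCKED_TERMS)
    audit_log.append(ModerationRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        prompt=prompt,
        allowed=not flagged,
        reason="deny-list match" if flagged else "passed checks",
    ))
    return not flagged

audit_log: list = []
if moderate("a landscape at dusk", audit_log):
    ...  # hand the prompt to the image model only after the gate passes
```

The point of the sketch is the shape, not the filter: every request is checked before generation, and every decision is recorded, which is the kind of evidence of diligence the paper suggests matters for liability.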
Read more: https://arxiv.org/abs/2601.03788
This summary is informational and not legal advice.
#AI #law #safety #Germany #CSAM #GenAI #techpolicy #responsibleAI