Are AI legal helpers fair? Early evidence from a Czech custody scenario
As more people turn to AI for legal self-help, a new study asks a simple question: do today’s large language models treat mothers and fathers the same in shared-parenting advice?
Researchers designed a realistic Czech divorce scenario and queried four leading models in a zero-shot setup. They ran two versions—one with gendered names and one with neutral labels—and varied nine legally relevant factors to see how recommendations for parenting time shifted.
- Outcomes differed across models.
- Some systems showed gender-dependent patterns in the parenting-time ratios they recommended.
- Findings are preliminary and descriptive, not proof of causation.
Why it matters: Laypeople may take AI answers at face value, even when outputs are incomplete, incorrect, or biased. The authors call for stronger evaluations and safeguards before relying on AI in sensitive legal contexts.
Paper by Jakub Harasta, Matej Vasina, Martin Kornel, and Tomas Foltynek.
Paper: https://arxiv.org/abs/2601.05879v1
Register: https://www.AiFeta.com
#AI #LLM #GenderBias #FamilyLaw #LegalTech #AccessToJustice #CzechLaw #AIEthics