Do AI Models Lean Left? Auditing Bias with Real Parliament Votes
AI is shaping what we read and decide. But do large language models carry hidden political leanings?
A new study proposes a transparent way to audit political bias: ask models to predict how parliaments voted, then compare those predictions to the real records.
- Benchmarks: PoliBiasNL (2,701 Dutch motions; 15 parties), PoliBiasNO (10,584 Norwegian motions; 9 parties), PoliBiasES (2,480 Spanish motions; 10 parties).
- Shared map: models and parties are placed in a two-dimensional CHES ideology space for easy, apples-to-apples comparison.
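The audit loop described above can be sketched simply: elicit a model's predicted vote on each motion, then score agreement against the real voting record, broken down by party. A minimal illustration, with invented party names and vote data (not from the paper):

```python
# Hypothetical sketch of the audit idea: compare model-predicted votes
# against actual parliamentary votes, per party. All data here is invented.
from collections import defaultdict

def agreement_by_party(records):
    """records: list of (party, actual_vote, predicted_vote) tuples,
    votes being 'for' or 'against'. Returns per-party agreement rate."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for party, actual, predicted in records:
        totals[party] += 1
        if actual == predicted:
            hits[party] += 1
    return {p: hits[p] / totals[p] for p in totals}

# Toy example: the model agrees with PartyB more often than PartyA.
records = [
    ("PartyA", "for", "for"),
    ("PartyA", "against", "for"),
    ("PartyB", "against", "against"),
    ("PartyB", "against", "against"),
]
print(agreement_by_party(records))  # {'PartyA': 0.5, 'PartyB': 1.0}
```

Systematically higher agreement with some parties than others is the signal the benchmarks quantify, before projecting it into the CHES space.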
Across all three countries, the authors report a consistent pattern: state-of-the-art LLMs tend to land left-of-center or centrist, with markedly negative biases toward right-conservative parties.
Why it matters: grounding evaluations in real parliamentary behavior makes cross-national auditing of political bias more interpretable and harder to game.
Paper by Jieying Chen, Karen de Jong, Andreas Poole, Jan Burakowski, Elena Elderson Nosti, Joep Windt, and Chendi Wang. Read more: https://arxiv.org/abs/2601.08785v1
Register: https://www.AiFeta.com
AI LLM political-bias NLP transparency accountability benchmarking Netherlands Norway Spain