Can AI mirror how people cooperate? A new study puts it to the test

How human are today’s AI models when making social decisions? This study builds a "digital twin" of classic game-theory experiments and checks whether large language models (LLMs) act like people.

  • Llama closely replicates human cooperation patterns, including the ways people deviate from strict rational-choice rules.
  • Qwen behaves more like textbook game theory (Nash equilibrium), sticking to strategic "perfect rationality."
  • Across models, choices ranged from closely human-like to strictly rational, with clear differences between models.
  • No persona prompts were needed to match population-level behavior, simplifying simulations.
  • The team also preregistered predictions for new, untested game settings, extending the original experiments.
Calibrated LLMs can reproduce aggregate human behavior and help explore social experiments before running them with people.
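
For intuition, here is a minimal sketch of what such a "digital twin" loop can look like: an LLM is prompted to play a one-shot Prisoner's Dilemma many times, and its aggregate cooperation rate is compared against a human baseline. The prompt wording, the `query_llm` stub, and the 0.46 human baseline are illustrative assumptions for this sketch, not details taken from the paper.

```python
# Toy sketch (not the paper's code): measure an LLM "player's" cooperation
# rate in a one-shot Prisoner's Dilemma and compare it to a human baseline.
import random

PROMPT = (
    "You are playing a one-shot Prisoner's Dilemma. "
    "Payoffs: both cooperate = 3 each, both defect = 1 each, "
    "defect vs. cooperate = 5 vs. 0. Reply with COOPERATE or DEFECT."
)

def query_llm(prompt: str) -> str:
    """Stand-in for a real model call; swap in an actual LLM client here."""
    return random.choice(["COOPERATE", "DEFECT"])  # placeholder behavior

def cooperation_rate(n_trials: int = 200) -> float:
    """Fraction of trials in which the model chooses to cooperate."""
    coops = sum(query_llm(PROMPT) == "COOPERATE" for _ in range(n_trials))
    return coops / n_trials

if __name__ == "__main__":
    llm_rate = cooperation_rate()
    human_rate = 0.46  # hypothetical human baseline, for illustration only
    print(f"LLM cooperation rate:     {llm_rate:.2f}")
    print(f"Human baseline (assumed): {human_rate:.2f}")
    print(f"Gap:                      {abs(llm_rate - human_rate):.2f}")
```

In a real study the stub would be replaced with calls to the models under test (e.g., Llama or Qwen), and the same loop would be run across the different payoff settings of the original experiments.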

Why it matters: If models don’t mirror real decisions, simulations built on them can mislead policy, health, or education applications. When they do match human behavior, they become powerful, low-cost partners for testing ideas about cooperation and conflict.

Paper: http://arxiv.org/abs/2511.04500v1

Register: https://www.AiFeta.com

#AI #LLM #GameTheory #Cooperation #BehavioralScience #SocialScience #Research #Replication