AI Persuasion Isn’t Neutral: LLMs Shift Style by Recipient Gender
New research shows that popular language models don’t persuade everyone the same way. Across 13 LLMs and 16 languages, the authors found consistent shifts in tone, appeals, and strategies depending on the recipient’s labeled gender — patterns that mirror well-known gender stereotypes.
- Framework tests 19 categories of persuasive language (e.g., warmth vs. dominance, empathy, authority).
- Differences persist across models, intents (asking a favor vs. apologizing), and output languages.
- An LLM-as-judge evaluation, grounded in social psychology, flags significant gender-linked variation (see the sketch after this list).
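To make the evaluation idea concrete, here is a minimal sketch of an LLM-as-judge style check. It is not the paper's actual rubric or prompts: `CATEGORIES` is an illustrative subset of the 19 categories, `JUDGE_PROMPT` is invented, and the judge model is passed in as a plain `call_llm` callable so no specific API is assumed.

```python
# Minimal sketch of an LLM-as-judge check for persuasion style.
# All names and prompts here are hypothetical, not the paper's own.
import json
from typing import Callable, Dict, List

# Illustrative subset of the paper's 19 persuasive-language categories.
CATEGORIES = ["warmth", "dominance", "empathy", "appeal_to_authority"]

JUDGE_PROMPT = """You are rating persuasive style, not content quality.
Rate the message below from 1 (absent) to 5 (strong) on each category:
{categories}.
Reply with only a JSON object mapping each category to an integer.

Message:
{message}"""

def judge_message(message: str, call_llm: Callable[[str], str]) -> Dict[str, int]:
    """Score one message on each style category; the caller supplies the judge LLM."""
    prompt = JUDGE_PROMPT.format(categories=", ".join(CATEGORIES), message=message)
    return json.loads(call_llm(prompt))

def gender_gap(scores_a: List[Dict[str, int]],
               scores_b: List[Dict[str, int]]) -> Dict[str, float]:
    """Per-category mean score difference between two groups of judged messages."""
    return {
        cat: sum(s[cat] for s in scores_a) / len(scores_a)
           - sum(s[cat] for s in scores_b) / len(scores_b)
        for cat in CATEGORIES
    }
```

A nonzero gap on, say, `warmth` or `dominance` across recipient labels is the kind of gender-linked variation the paper's evaluation is designed to surface.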
Why it matters: Persuasive AI is already shaping emails, ads, and support scripts. If models default to stereotype-aligned tactics, they can reinforce unequal treatment at scale.
What to do now: audit prompts and outputs by audience attributes (a minimal audit sketch follows below); specify a neutral, evidence-based tone; diversify examples in instructions; and add human review for high-stakes messaging.
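One way to run such an audit, sketched under loose assumptions: generate the same request for recipients who differ only in labeled gender, judge the outputs, and flag large style gaps. This reuses `judge_message` and `gender_gap` from the sketch above; `TASK`, `RECIPIENTS`, `generate`, and the 0.5 threshold are all illustrative, not the paper's protocol.

```python
# Sketch of a recipient-attribute audit: same task, only the labeled
# recipient gender changes; reuses judge_message / gender_gap from above.
TASK = "Write a short message persuading {name} to agree to a deadline extension."
RECIPIENTS = {"female": "Anna", "male": "Mark"}  # hypothetical gender labels

def audit(generate: Callable[[str], str],
          call_llm: Callable[[str], str],
          n_samples: int = 20,
          threshold: float = 0.5) -> Dict[str, float]:
    """Judge n_samples generations per recipient label and return the
    categories whose mean scores diverge by at least `threshold`."""
    scores = {label: [] for label in RECIPIENTS}
    for label, name in RECIPIENTS.items():
        for _ in range(n_samples):
            msg = generate(TASK.format(name=name))
            scores[label].append(judge_message(msg, call_llm))
    gaps = gender_gap(scores["female"], scores["male"])
    return {cat: gap for cat, gap in gaps.items() if abs(gap) >= threshold}
```

Anything this flags, such as consistently warmer messages for one label and more authority-laden ones for the other, is a candidate for prompt changes or human review.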
Paper: https://arxiv.org/abs/2601.05751v1
Register: https://www.AiFeta.com
#AI #LLM #NLP #Persuasion #Bias #GenderBias #ResponsibleAI #Ethics