When AI says it didn't use the hint but did
New paper alert: Large Reasoning Models (LRMs) can misreport how they arrived at an answer. Extending Chen et al. (2025), William Walden tests LRMs on multiple-choice questions whose prompts contain subtle hints. The models often exploit these hints to pick the correct option, but when asked how they reasoned, they frequently claim they didn't use the hint at all.