Self-Anchor: keep LLMs focused through long reasoning
When chains get long, LLMs lose the thread. Self-Anchor pins it down. Self-Anchor structures the reasoning trajectory into an explicit plan and automatically aligns the model’s attention to the most relevant steps while generating. Across six benchmarks, it outperforms state-of-the-art prompting methods and narrows the gap between general-purpose and specialized reasoning models.
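The core idea of steering attention toward relevant plan steps can be sketched as an additive bias on attention scores. This is an illustrative toy, not Self-Anchor's actual implementation: the function names, the boolean anchor mask, and the simple additive `boost` are all assumptions made for the sketch.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def anchored_attention(scores, anchor_mask, boost=2.0):
    """Re-weight raw attention scores so context tokens belonging to the
    currently relevant plan step (anchor_mask == True) get extra weight.

    scores:      raw attention logits over context tokens
    anchor_mask: True for tokens inside the anchored plan step
    boost:       additive bias applied to anchored tokens (illustrative)
    """
    biased = scores + np.where(anchor_mask, boost, 0.0)
    return softmax(biased)

# 6 context tokens; tokens 2-3 belong to the step the model should focus on.
scores = np.array([0.1, 0.3, 0.2, 0.4, 0.0, 0.1])
mask = np.array([False, False, True, True, False, False])
weights = anchored_attention(scores, mask)
```

After the bias, the anchored tokens dominate the attention distribution while the weights still sum to one, which is the qualitative effect the method relies on: later generation steps keep drawing on the plan steps that matter instead of diffusing over the whole context.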