AI that stays on track for days: ML-Master 2.0

AI is great at quick tasks—but stumbles on week-long projects. This paper tackles that ultra-long-horizon gap.

Meet ML-Master 2.0, an autonomous agent for machine learning engineering that stays strategically coherent over days. Its core idea, Hierarchical Cognitive Caching (HCC), treats memory like a multi-level cache and a lab notebook: it condenses messy run-by-run traces into stable know-how and cross-task wisdom, so the agent can execute now while planning for later.

  • Short term: keep only what’s needed to act.
  • Mid term: distill repeats into playbooks.
  • Long term: carry lessons across tasks.
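The three tiers above can be sketched as a tiny cache hierarchy. This is an illustrative toy under assumed names and thresholds (the class `HierarchicalMemory`, the repeat threshold of 3, and the trace fields are my inventions), not the paper's actual HCC implementation:

```python
from collections import deque

class HierarchicalMemory:
    """Toy three-tier memory inspired by the HCC idea described above.
    Names, fields, and thresholds are illustrative assumptions,
    not the paper's implementation."""

    def __init__(self, short_capacity=4):
        self.short = deque(maxlen=short_capacity)  # short term: recent traces only
        self.mid = {}    # mid term: (action, outcome) -> repeat count
        self.long = []   # long term: lessons carried across tasks

    def record(self, trace):
        """Keep only the last few raw traces needed to act now."""
        self.short.append(trace)
        self._distill(trace)

    def _distill(self, trace):
        """When the same outcome pattern repeats, promote it to a lesson."""
        key = (trace["action"], trace["outcome"])
        self.mid[key] = self.mid.get(key, 0) + 1
        if self.mid[key] == 3:  # arbitrary repeat threshold for this sketch
            self.long.append(f"{trace['action']} usually leads to {trace['outcome']}")

    def lessons(self):
        """Stable cross-task wisdom, independent of raw run-by-run traces."""
        return list(self.long)

mem = HierarchicalMemory()
for _ in range(3):
    mem.record({"action": "increase_lr", "outcome": "diverged"})
print(mem.lessons())  # → ['increase_lr usually leads to diverged']
```

The point of the design: raw traces stay bounded (the deque evicts old entries), while only distilled, repeat-confirmed knowledge accumulates, which is how a long-horizon agent avoids drowning in context.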

On OpenAI’s MLE-Bench with 24-hour budgets, ML-Master 2.0 sets a new state of the art: a 56.44% medal rate. That’s a concrete step toward agents that can run experiments, learn from sparse feedback, and improve over many cycles—without drowning in context.

Paper: https://arxiv.org/abs/2601.10402v1

Register: https://www.AiFeta.com

#AI #AutonomousAgents #MachineLearning #AgenticScience #LongHorizon #Memory #HCC #Research #arXiv #MLEBench #MLMaster2