LoRA on the Go: Training-Free, On-the-Fly Adapter Mixing for LLMs

TL;DR

LoGo (LoRA on the Go) lets large language models select and blend the right LoRA adapters for each input, with no extra training, labels, or task setup.

  • Training-free: uses signals from a single forward pass to select and weight adapters.
  • Instance-level: decisions happen on-the-fly for every query.
  • Practical: maintains inference throughput while handling diverse, unpredictable domains.
  • Effective: evaluated on 5 NLP benchmarks (27 datasets) across 3 model families, with gains of up to 3.6% over training-based baselines and competitive performance elsewhere.
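The bullets above can be sketched in code. This is only an illustrative toy, not the paper's method: the exact forward-pass signals LoGo uses are not described in this post, so the `scores` below are stand-in values, and softmax weighting plus a weighted sum of low-rank updates is an assumed (common) way to mix adapters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(scores):
    """Numerically stable softmax over per-adapter scores."""
    z = np.exp(scores - np.max(scores))
    return z / z.sum()

def mix_lora_adapters(W, adapters, scores):
    """Blend several LoRA adapters into one base weight matrix.

    W        : (d_out, d_in) base weight
    adapters : list of (A, B) low-rank factors; each delta_i = B_i @ A_i
    scores   : per-adapter relevance scores (hypothetical here; the
               paper derives its signals from a single forward pass)
    """
    weights = softmax(np.asarray(scores, dtype=float))
    delta = np.zeros_like(W)
    for w, (A, B) in zip(weights, adapters):
        delta += w * (B @ A)  # weighted low-rank update
    return W + delta, weights

# Toy demo: 6x6 base weight, rank-2 factors, three candidate adapters.
d, r = 6, 2
W = rng.standard_normal((d, d))
adapters = [(rng.standard_normal((r, d)), rng.standard_normal((d, r)))
            for _ in range(3)]
scores = [0.1, 2.0, -0.5]  # stand-in for forward-pass relevance signals
W_mixed, weights = mix_lora_adapters(W, adapters, scores)
```

Because the blend is recomputed from each input's own scores, a different query can yield different mixing weights, which is the instance-level behavior the post highlights.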

Why it matters: Real-world inputs aren’t neatly labeled. Instead of betting on one task-specific LoRA, LoGo dynamically assembles the best mix for whatever comes its way.

Authors: Seungeon Lee, Soumi Das, Manish Gupta, Krishna P. Gummadi. Paper: http://arxiv.org/abs/2511.07129v1

Register: https://www.AiFeta.com

#LLM #LoRA #NLP #AI #MachineLearning #Adapters #Inference #Research
