LoRA on the Go: Instance-level Dynamic LoRA Selection and Merging
TL;DR
LoRA adapters are small plug-in modules that fine-tune big language models cheaply. But a single adapter per task breaks down on messy, real-world inputs, where the right task is unknown or several tasks are mixed.
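For background, a LoRA adapter keeps the pretrained weight frozen and learns only a low-rank update B·A on top of it. A minimal PyTorch sketch of that idea (the rank, scaling, and layer sizes here are illustrative, not taken from the paper):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Frozen base layer plus a trainable low-rank update: y = W x + (alpha / r) * B (A x)
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the big model stays frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no effect at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(nn.Linear(768, 768))
print(layer(torch.randn(2, 768)).shape)  # torch.Size([2, 768])
```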
What’s new
LoRA on the Go (LoGo) lets a model pick and blend the best adapters for each individual input—no labels, no extra training, no slowdown.
How it works
- Runs a single forward pass with the available LoRA adapters.
- Uses simple signals from that pass to judge which adapters matter for the current input.
- Dynamically selects and merges them on the fly (toy sketch below).
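The post doesn't spell out the exact signal LoGo computes, so the sketch below uses the magnitude of each adapter's output delta on the current input as a stand-in score, keeps the top-k adapters, and blends them with softmax weights. `select_and_merge`, `make_adapter`, `top_k`, and the norm-based score are all illustrative assumptions, not LoGo's actual implementation:

```python
import torch

def select_and_merge(x, base_out, adapters, top_k=2):
    # Instance-level selection + merging sketch.
    # adapters: callables mapping x -> this layer's low-rank output delta.
    deltas = [a(x) for a in adapters]                 # one pass over the adapter pool
    scores = torch.stack([d.norm() for d in deltas])  # assumed signal: delta magnitude
    top = torch.topk(scores, k=min(top_k, len(adapters))).indices
    weights = torch.softmax(scores[top], dim=0)       # blend only the selected adapters
    merged = sum(w * deltas[int(i)] for w, i in zip(weights, top))
    return base_out + merged

# Toy usage with random low-rank "task" adapters (illustrative only).
torch.manual_seed(0)
d = 768
x = torch.randn(2, d)
base_out = torch.nn.Linear(d, d)(x)

def make_adapter(r=8):
    A, B = torch.randn(r, d) * 0.01, torch.randn(d, r) * 0.01
    return lambda x: (x @ A.T) @ B.T

adapters = [make_adapter() for _ in range(4)]
print(select_and_merge(x, base_out, adapters).shape)  # torch.Size([2, 768])
```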
Why it matters
- Training-free and plug-and-play.
- Handles mixed, unpredictable tasks.
- Maintains inference throughput.
- Across 5 benchmarks, 27 datasets, and 3 model families: competitive overall, and up to 3.6% better than training-based methods on some tasks.
Paper: http://arxiv.org/abs/2511.07129v1
Register: https://www.AiFeta.com
AI LLM LoRA NLP MachineLearning EfficientAI Inference Research