Improving Recursive Transformers with Mixture of LoRAs
1️⃣ One-sentence summary
This paper proposes a lightweight method called MoL that inserts dynamically selected low-rank adapters into a shared network, addressing the loss of expressivity that parameter sharing causes in recursive Transformers. With MoL, a small model can match or even surpass larger models, and at inference the experts can be compressed into a single efficient module.
Parameter sharing in recursive transformers reduces model size but collapses layer-wise expressivity. We propose Mixture of LoRAs (MoL), a lightweight conditional-computation mechanism that inserts Low-Rank Adaptation (LoRA) experts inside a shared feed-forward network (FFN). MoL enables token-conditional weight-space modulation of the shared FFN without untying backbone parameters, unlike prior approaches that add fixed or externally attached adapters. We pretrain a modernised recursive architecture, ModernALBERT, integrating rotary embeddings, GeGLU, FlashAttention, and a distillation-based initialisation. Across GLUE, SQuAD-v2, and BEIR, ModernALBERT (50M--120M) achieves state-of-the-art performance among compact models and surpasses larger fully parameterised baselines. We also propose an expert-merging procedure that compresses MoL into a single adapter at inference while preserving accuracy, enabling efficient deployment. Our results show that conditional weight-space modulation effectively restores the expressivity lost under aggressive parameter sharing in recursive transformers.
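To make the mechanism concrete, below is a minimal sketch of a MoL-style feed-forward block: LoRA experts selected by a token-level softmax router modulate the up-projection of a shared GeGLU FFN, and a simple uniform merge folds the experts into a single adapter for inference. Names such as `MoLFFN`, `num_experts`, and `rank`, the choice of which projection to adapt, and the merging rule are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a Mixture-of-LoRAs (MoL) feed-forward block (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoLFFN(nn.Module):
    def __init__(self, d_model=768, d_ff=2048, num_experts=4, rank=8):
        super().__init__()
        # Shared (tied) GeGLU FFN weights, reused at every recursion step.
        self.w_gate = nn.Linear(d_model, d_ff, bias=False)
        self.w_up = nn.Linear(d_model, d_ff, bias=False)
        self.w_down = nn.Linear(d_ff, d_model, bias=False)
        # Per-expert low-rank adapters (A_e, B_e) modulating the up-projection.
        self.lora_A = nn.Parameter(torch.randn(num_experts, d_model, rank) * 0.02)
        self.lora_B = nn.Parameter(torch.zeros(num_experts, rank, d_ff))
        # Token-conditional router over the experts.
        self.router = nn.Linear(d_model, num_experts, bias=False)

    def forward(self, x):  # x: (batch, seq, d_model)
        probs = F.softmax(self.router(x), dim=-1)                    # (b, s, E)
        # Per-expert LoRA delta: x @ A_e @ B_e, then mixed by router probabilities.
        low = torch.einsum("bsd,edr->bser", x, self.lora_A)          # (b, s, E, r)
        delta = torch.einsum("bser,erf->bsef", low, self.lora_B)     # (b, s, E, d_ff)
        delta = (probs.unsqueeze(-1) * delta).sum(dim=2)             # (b, s, d_ff)
        up = self.w_up(x) + delta                                    # modulated up-projection
        hidden = F.gelu(self.w_gate(x)) * up                         # GeGLU
        return self.w_down(hidden)

    @torch.no_grad()
    def merge_experts(self):
        # Uniformly average the per-expert LoRA updates (A_e @ B_e) and fold them
        # into the shared up-projection, so inference uses a single merged adapter.
        # (A simple stand-in for the paper's expert-merging procedure.)
        delta_w = torch.einsum("edr,erf->edf", self.lora_A, self.lora_B).mean(dim=0)
        self.w_up.weight.add_(delta_w.T)


# Usage: the backbone stays tied; only the adapters and router vary per token.
ffn = MoLFFN()
y = ffn(torch.randn(2, 16, 768))   # (2, 16, 768)
ffn.merge_experts()                # collapse experts for efficient deployment
```

The key design point this sketch illustrates is that the shared FFN weights are never untied; the experts only add a low-rank, token-conditional correction, which is why they can later be merged back into the shared weights.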
Source: arXiv: 2512.12880