arXiv submission date: 2026-01-08
📄 Abstract - Learnable Multipliers: Freeing the Scale of Language Model Matrix Layers

Applying weight decay (WD) to matrix layers is standard practice in large-language-model pretraining. Prior work suggests that stochastic gradient noise induces a Brownian-like expansion of the weight matrices W, whose growth is counteracted by WD, leading to a WD-noise equilibrium with a certain weight norm ||W||. In this work, we view the equilibrium norm as a harmful artifact of the training procedure, and address it by introducing learnable multipliers to learn the optimal scale. First, we attach a learnable scalar multiplier to W and confirm that the WD-noise equilibrium norm is suboptimal: the learned scale adapts to data and improves performance. We then argue that individual row and column norms are similarly constrained, and free their scale by introducing learnable per-row and per-column multipliers. Our method can be viewed as a learnable, more expressive generalization of muP multipliers. It outperforms a well-tuned muP baseline, reduces the computational overhead of multiplier tuning, and surfaces practical questions such as forward-pass symmetries and the width-scaling of the learned multipliers. Finally, we validate learnable multipliers with both the Adam and Muon optimizers, where they yield improvements in downstream evaluations comparable to the improvement gained by switching from Adam to Muon.
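
To make the abstract's construction concrete, here is a minimal sketch (not the authors' implementation) of a linear layer whose base weight matrix W carries learnable per-row and per-column multipliers, so the effective weight is diag(r) · W · diag(c). The class name `ScaledLinear`, the initialization choices, and the decision to exclude the multipliers from weight decay are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a linear layer with learnable per-row and per-column
# multipliers, written in PyTorch. Only the base weight W is subject to
# weight decay; the multipliers learn the overall and per-row/column scales.

import torch
import torch.nn as nn


class ScaledLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Base weight matrix W, decayed as usual during training.
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.normal_(self.weight, std=in_features ** -0.5)
        # Learnable multipliers: one scale per output row and per input column.
        # Initialized to 1 so training starts from a plain linear layer.
        self.row_scale = nn.Parameter(torch.ones(out_features))
        self.col_scale = nn.Parameter(torch.ones(in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight: diag(row_scale) @ W @ diag(col_scale).
        w_eff = self.row_scale[:, None] * self.weight * self.col_scale[None, :]
        return x @ w_eff.t()


# Assumed usage: keep W in the WD-noise equilibrium by decaying it, while the
# multipliers (no decay) are free to adapt their scale to the data.
layer = ScaledLinear(1024, 4096)
decay, no_decay = [], []
for name, p in layer.named_parameters():
    (no_decay if "scale" in name else decay).append(p)
optimizer = torch.optim.AdamW(
    [{"params": decay, "weight_decay": 0.1},
     {"params": no_decay, "weight_decay": 0.0}],
    lr=3e-4,
)
```

A scalar multiplier (the paper's first experiment) is the special case where `row_scale` and `col_scale` collapse to a single learnable scalar; the per-row/per-column form above is the more expressive generalization of muP multipliers that the abstract describes.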

Top-level tags: model training, llm, theory
Detailed tags: weight decay, learnable multipliers, optimization, parameter scaling, language model pretraining

Learnable Multipliers: Freeing the Scale of Language Model Matrix Layers


1️⃣ One-sentence summary

This paper proposes a new method that introduces learnable multipliers into the matrix layers of language models, automatically learning the optimal weight scale in place of the suboptimal equilibrium induced by conventional weight decay, and improving model performance across different optimizers.

Source: arXiv:2601.04890