arXiv submission date: 2026-02-11
📄 Abstract - $μ$pscaling small models: Principled warm starts and hyperparameter transfer

Modern large-scale neural networks are often trained and released in multiple sizes to accommodate diverse inference budgets. To improve efficiency, recent work has explored model upscaling: initializing larger models from trained smaller ones in order to transfer knowledge and accelerate convergence. However, this method can be sensitive to hyperparameters that need to be tuned at the target upscaled model size, which is prohibitively costly to do directly. It remains unclear whether the most common workaround -- tuning on smaller models and extrapolating via hyperparameter scaling laws -- is still sound when using upscaling. We address this with principled approaches to upscaling with respect to model widths and efficiently tuning hyperparameters in this setting. First, motivated by $\mu$P and any-dimensional architectures, we introduce a general upscaling method applicable to a broad range of architectures and optimizers, backed by theory guaranteeing that models are equivalent to their widened versions and allowing for rigorous analysis of infinite-width limits. Second, we extend the theory of $\mu$Transfer to a hyperparameter transfer technique for models upscaled using our method and empirically demonstrate that this method is effective on realistic datasets and architectures.
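To make the abstract's claim that "models are equivalent to their widened versions" concrete, here is a minimal NumPy sketch of a generic function-preserving width expansion for a one-hidden-layer ReLU MLP. This is a duplicate-and-rescale construction in the spirit of Net2Net, shown only to illustrate the equivalence property; it is not the paper's µP-based upscaling rule, and the names `widen_mlp` and `mlp` are illustrative.

```python
import numpy as np

def widen_mlp(W1, b1, W2, factor=2):
    """Function-preserving width expansion of a one-hidden-layer MLP.

    Illustrative construction: each hidden unit is replicated `factor`
    times and its outgoing weights are divided by `factor`, so the
    widened network computes exactly the same function as the original.
    """
    # Shapes: W1 (hidden, in), b1 (hidden,), W2 (out, hidden)
    W1_big = np.repeat(W1, factor, axis=0)           # copy incoming weights per unit
    b1_big = np.repeat(b1, factor, axis=0)           # copy biases per unit
    W2_big = np.repeat(W2, factor, axis=1) / factor  # rescale outgoing weights
    return W1_big, b1_big, W2_big

def mlp(x, W1, b1, W2):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden layer
    return W2 @ h

rng = np.random.default_rng(0)
W1, b1, W2 = rng.normal(size=(8, 4)), rng.normal(size=8), rng.normal(size=(3, 8))
x = rng.normal(size=4)

W1b, b1b, W2b = widen_mlp(W1, b1, W2, factor=4)
# The widened model reproduces the small model's outputs exactly.
assert np.allclose(mlp(x, W1, b1, W2), mlp(x, W1b, b1b, W2b))
```

A real upscaling method, like the one described in the abstract, also has to cover attention blocks, normalization layers, and optimizer state; this sketch only demonstrates the width-equivalence idea for a single hidden layer.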

Top-level tags: model training, theory, machine learning
Detailed tags: model upscaling, hyperparameter transfer, mu transfer, neural network initialization, width scaling

$μ$pscaling small models: Principled warm starts and hyperparameter transfer


1️⃣ One-sentence summary

This paper proposes a model upscaling method with theoretical guarantees that efficiently expands a trained small model into a larger one, together with a hyperparameter transfer technique that lets the upscaled model reach good performance without re-tuning, substantially reducing compute cost.
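As a rough illustration of what hyperparameter transfer buys in practice, the sketch below rescales a hidden-layer Adam learning rate tuned at a cheap proxy width to a wider target model. It assumes the 1/width scaling that µP/µTransfer prescribe for matrix-like (hidden) parameters under Adam; the exact rule depends on layer type and optimizer, and this is an assumption for illustration rather than the transfer rule derived in the paper.

```python
def transfer_lr(base_lr: float, base_width: int, target_width: int) -> float:
    """Rescale a hidden-layer Adam learning rate tuned at `base_width`
    to a wider model, using an assumed 1/width rule for hidden
    (matrix-like) parameters. Input/output layers and other optimizers
    follow different rules; this is only a sketch.
    """
    return base_lr * base_width / target_width

# Tune once at a cheap proxy width, then reuse at the upscaled width.
tuned_lr = 3e-4                           # hypothetical value found at width 256
print(transfer_lr(tuned_lr, 256, 4096))   # learning rate for the 4096-wide model
```

The point of such a rule is that the expensive sweep happens only at the small width; the wide, upscaled model inherits its hyperparameters by rescaling rather than by a fresh search.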

Source: arXiv:2602.10545