What do near-optimal learning rate schedules look like?
1️⃣ One-sentence summary
Using a systematic search procedure, this paper finds that near-optimal learning rate schedules for neural network training consistently include warmup and decay phases, and that their exact shape is strongly influenced by other hyperparameters such as weight decay.
A basic unanswered question in neural network training is: what is the best learning rate schedule shape for a given workload? The choice of learning rate schedule is a key factor in the success or failure of the training process, but beyond having some kind of warmup and decay, there is no consensus on what makes a good schedule shape. To answer this question, we designed a search procedure to find the best shapes within a parameterized schedule family. Our approach factors out the schedule shape from the base learning rate, which otherwise would dominate cross-schedule comparisons. We applied our search procedure to a variety of schedule families on three workloads: linear regression, image classification on CIFAR-10, and small-scale language modeling on Wikitext103. We showed that our search procedure indeed generally found near-optimal schedules. We found that warmup and decay are robust features of good schedules, and that commonly used schedule families are not optimal on these workloads. Finally, we explored how the outputs of our shape search depend on other optimization hyperparameters, and found that weight decay can have a strong effect on the optimal schedule shape. To the best of our knowledge, these are the most comprehensive results on near-optimal schedule shapes for deep neural network training to date.
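The key idea of factoring the schedule shape out of the base learning rate can be sketched as follows: the shape is a function normalized to peak at 1.0, and the base learning rate is a separate scalar multiplier, so two shapes can be compared after tuning the base rate for each independently. This is a minimal illustrative sketch, not the paper's actual schedule family or search code; the warmup-plus-cosine family and all parameter names here are assumptions.

```python
import math

def schedule_shape(step, total_steps, warmup_frac=0.1):
    """Schedule *shape*: linear warmup then cosine decay, normalized
    so its peak value is exactly 1.0. Normalizing the shape lets the
    base learning rate be tuned separately, so shape comparisons are
    not dominated by the base-rate choice."""
    warmup_steps = max(1, int(warmup_frac * total_steps))
    if step < warmup_steps:
        # linear warmup: ramps from 0 up to 1 at the end of warmup
        return step / warmup_steps
    # cosine decay: falls from 1 down to 0 over the remaining steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

def learning_rate(step, total_steps, base_lr, warmup_frac=0.1):
    # the base rate is a plain scalar multiplier on the shape
    return base_lr * schedule_shape(step, total_steps, warmup_frac)
```

With this factorization, a shape search can sweep over shape parameters (here, `warmup_frac` or the decay form) while re-tuning `base_lr` for each candidate shape, which is what makes cross-shape comparisons fair.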
Source: arXiv:2603.10301