arXiv submission date: 2026-03-03
📄 Abstract - HomeAdam: Adam and AdamW Algorithms Sometimes Go Home to Obtain Better Provable Generalization

Adam and AdamW are among the default optimizers for training deep learning models. These adaptive algorithms converge faster but generalize worse than SGD. In fact, their proven generalization error $O(\frac{1}{\sqrt{N}})$ is also larger than the $O(\frac{1}{N})$ of SGD, where $N$ denotes the training sample size. Although some variants of Adam have recently been proposed to improve its generalization, their improved generalization remains theoretically unexplored. To fill this gap, in this paper we restudy the generalization of Adam and AdamW via algorithmic stability, and first prove that Adam and AdamW without the square root (i.e., Adam(W)-srf) have a generalization error of $O(\frac{\hat{\rho}^{-2T}}{N})$, where $T$ denotes the number of iterations and $\hat{\rho}>0$ denotes the smallest element of the second-order momentum plus a small positive number. To improve generalization, we propose a class of efficient Adam algorithms (i.e., HomeAdam(W)) that sometimes return to momentum-based SGD. Moreover, we prove that our HomeAdam(W) has a smaller generalization error of $O(\frac{1}{N})$ than the $O(\frac{\hat{\rho}^{-2T}}{N})$ of Adam(W)-srf, since $\hat{\rho}$ is generally very small; in particular, it is also smaller than the existing $O(\frac{1}{\sqrt{N}})$ of Adam(W). Meanwhile, we prove that our HomeAdam(W) has a faster convergence rate of $O(\frac{1}{T^{1/4}})$ than the $O(\frac{\breve{\rho}^{-1}}{T^{1/4}})$ of Adam(W)-srf, where $\breve{\rho}\leq\hat{\rho}$ is also very small. Extensive numerical experiments demonstrate the efficiency of our HomeAdam(W) algorithms.
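
The paper's exact update and switching rules are not reproduced in this summary; below is a minimal, hypothetical NumPy sketch of the core idea: an AdamW-style optimizer that keeps first- and second-order momentum, drops the square root in the adaptive step (the Adam(W)-srf form), and occasionally "goes home" to a plain momentum-SGD step. The class name, the fixed `home_every` schedule, and all hyperparameters are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

class HomeAdamSketch:
    """Illustrative sketch (not the paper's algorithm): Adam(W)-srf updates
    that periodically fall back to a plain momentum-SGD step."""

    def __init__(self, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8,
                 weight_decay=0.0, home_every=5):
        self.lr, self.b1, self.b2, self.eps = lr, beta1, beta2, eps
        self.wd = weight_decay        # AdamW-style decoupled weight decay
        self.home_every = home_every  # assumed rule: "go home" every k steps
        self.m = self.v = None
        self.t = 0

    def step(self, w, grad):
        if self.m is None:
            self.m = np.zeros_like(w)
            self.v = np.zeros_like(w)
        self.t += 1

        # First- and second-order momentum, as in Adam (bias correction omitted)
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2

        if self.t % self.home_every == 0:
            # "Go home": plain momentum-SGD step, no adaptive scaling
            update = self.m
        else:
            # Adam(W)-srf style step: scale by v itself, without the square root
            update = self.m / (self.v + self.eps)

        # Decoupled weight decay, as in AdamW
        return w - self.lr * (update + self.wd * w)


# Toy usage: minimize f(w) = ||w||^2 / 2, whose gradient is simply w
opt = HomeAdamSketch(lr=0.1, home_every=5)
w = np.ones(3)
for _ in range(500):
    w = opt.step(w, grad=w)
print(w)  # ends up near the minimizer at zero
```

A periodic schedule is only one way to decide when to "go home"; the point of the sketch is the two update branches, not the specific switching rule.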

Top-level tags: machine learning, model training, theory
Detailed tags: optimization algorithms, generalization error, Adam optimizer, algorithmic stability, convergence analysis

HomeAdam: Adam and AdamW Algorithms Sometimes Go Home to Obtain Better Provable Generalization


1️⃣ One-sentence summary

This paper proposes a new optimization algorithm named HomeAdam, which lets Adam/AdamW switch back to SGD-like momentum updates at certain steps, theoretically achieving both a faster convergence rate and better generalization than the original Adam.
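
For reference, the rates quoted in the abstract, collected side by side ($N$ is the training sample size, $T$ the iteration count, and $\hat{\rho}$, $\breve{\rho}$ are typically very small constants derived from the second-order momentum):

```latex
\begin{align*}
\text{Generalization error:}\quad
  & \text{Adam(W)}: O\!\left(\tfrac{1}{\sqrt{N}}\right), \quad
    \text{Adam(W)-srf}: O\!\left(\tfrac{\hat{\rho}^{-2T}}{N}\right), \quad
    \text{HomeAdam(W)}: O\!\left(\tfrac{1}{N}\right) \\
\text{Convergence rate:}\quad
  & \text{Adam(W)-srf}: O\!\left(\tfrac{\breve{\rho}^{-1}}{T^{1/4}}\right), \quad
    \text{HomeAdam(W)}: O\!\left(\tfrac{1}{T^{1/4}}\right)
\end{align*}
```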

Source: arXiv:2603.02649