无约束在线学习的梯度变化后悔界 / Gradient-Variation Regret Bounds for Unconstrained Online Learning
1️⃣ One-Sentence Summary
This paper proposes a new adaptive online learning algorithm that automatically adjusts its strategy according to how much the loss gradients vary across consecutive rounds, substantially improving the theoretical performance guarantees without requiring prior knowledge of key problem parameters.
We develop parameter-free algorithms for unconstrained online learning with regret guarantees that scale with the gradient variation $V_T(u) = \sum_{t=2}^T \|\nabla f_t(u)-\nabla f_{t-1}(u)\|^2$. For $L$-smooth convex losses, we provide fully adaptive algorithms achieving regret of order $\widetilde{O}(\|u\|\sqrt{V_T(u)} + L\|u\|^2+G^4)$ without requiring prior knowledge of the comparator norm $\|u\|$, the Lipschitz constant $G$, or the smoothness parameter $L$. The update in each round can be computed efficiently in closed form. Our results extend to dynamic regret and have immediate implications for the stochastically extended adversarial (SEA) model, significantly improving upon the previous best-known result [Wang et al., 2025].
Source: arXiv: 2604.11151