arXiv submission date: 2026-04-07
📄 Abstract - Stochastic Auto-conditioned Fast Gradient Methods with Optimal Rates

Achieving optimal rates for stochastic composite convex optimization without prior knowledge of problem parameters remains a central challenge. In the deterministic setting, the auto-conditioned fast gradient method has recently been proposed to attain optimal accelerated rates without line-search procedures or prior knowledge of the Lipschitz smoothness constant, providing a natural prototype for parameter-free acceleration. However, extending this approach to the stochastic setting has proven technically challenging and remains open. Existing parameter-free stochastic methods either fail to achieve accelerated rates or rely on restrictive assumptions, such as bounded domains, bounded gradients, prior knowledge of the iteration horizon, or strictly sub-Gaussian noise. To address these limitations, we propose a stochastic variant of the auto-conditioned fast gradient method, referred to as stochastic AC-FGM. The proposed method is fully adaptive to the Lipschitz constant, the iteration horizon, and the noise level, enabling both adaptive stepsize selection and adaptive mini-batch sizing without line-search procedures. Under standard bounded conditional variance assumptions, we show that stochastic AC-FGM achieves the optimal iteration complexity of $O(1/\sqrt{\varepsilon})$ and the optimal sample complexity of $O(1/\varepsilon^2)$.
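To make the abstract's two adaptive ingredients concrete, below is a minimal illustrative sketch of a stochastic gradient loop that (a) estimates the Lipschitz constant on the fly from successive gradients instead of requiring it up front, and (b) grows the mini-batch to damp gradient noise. This is a hypothetical toy in the spirit of the description above, not the paper's stochastic AC-FGM: the objective, the update rule, and all names here are assumptions for illustration, and the actual method additionally uses acceleration to reach the stated rates.

```python
import numpy as np

# Illustrative sketch only: adaptive stepsize via a running local Lipschitz
# estimate, plus growing mini-batches. NOT the paper's algorithm.
rng = np.random.default_rng(0)

def stochastic_grad(x, noise=0.1):
    """Noisy gradient of the toy smooth convex objective f(x) = 0.5 * ||x||^2."""
    return x + noise * rng.standard_normal(x.shape)

def adaptive_sgd(x0, iters=200, batch0=1):
    x, x_prev = x0.copy(), x0.copy()
    g_prev = stochastic_grad(x)
    L_hat = 1.0  # running Lipschitz estimate; no prior knowledge assumed
    for k in range(1, iters + 1):
        batch = batch0 * k  # growing mini-batch damps the gradient noise
        g = np.mean([stochastic_grad(x) for _ in range(batch)], axis=0)
        # Update the local curvature estimate from successive iterates/gradients.
        dx = np.linalg.norm(x - x_prev)
        if dx > 1e-12:
            L_hat = max(L_hat, np.linalg.norm(g - g_prev) / dx)
        x_prev, g_prev = x.copy(), g.copy()
        x = x - g / L_hat  # stepsize 1/L_hat, no line search
    return x

x_final = adaptive_sgd(np.ones(5) * 5.0)
print(np.linalg.norm(x_final))  # distance to the minimizer 0 shrinks
```

The key design point mirrored from the abstract: both the stepsize (through `L_hat`) and the batch size are chosen from observed quantities only, so neither the Lipschitz constant, the horizon, nor the noise level needs to be known in advance.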

Top-level tags: machine learning theory, model training
Detailed tags: stochastic optimization, convex optimization, gradient methods, parameter-free, adaptive algorithms

Stochastic Auto-conditioned Fast Gradient Methods with Optimal Rates


1️⃣ One-sentence summary

This paper proposes a new algorithm called the stochastic auto-conditioned fast gradient method (stochastic AC-FGM), which, without prior knowledge of problem parameters, automatically adapts to the problem's conditions and solves a class of stochastic composite convex optimization problems at the optimal rate.

Source: arXiv:2604.06525