arXiv submission date: 2026-02-10
📄 Abstract - From Average Sensitivity to Small-Loss Regret Bounds under Random-Order Model

We study online learning in the random-order model, where the multiset of loss functions is chosen adversarially but revealed in a uniformly random order. Building on the batch-to-online conversion by Dong and Yoshida (2023), we show that if an offline algorithm admits a $(1+\varepsilon)$-approximation guarantee and the effect of $\varepsilon$ on its average sensitivity is characterized by a function $\varphi(\varepsilon)$, then an adaptive choice of $\varepsilon$ yields a small-loss regret bound of $\tilde O(\varphi^{\star}(\mathrm{OPT}_T))$, where $\varphi^{\star}$ is the concave conjugate of $\varphi$, $\mathrm{OPT}_T$ is the offline optimum over $T$ rounds, and $\tilde O$ hides polylogarithmic factors in $T$. Our method requires no regularity assumptions on loss functions, such as smoothness, and can be viewed as a generalization of the AdaGrad-style tuning applied to the approximation parameter $\varepsilon$. Our result recovers and strengthens the $(1+\varepsilon)$-approximate regret bounds of Dong and Yoshida (2023) and yields small-loss regret bounds for online $k$-means clustering, low-rank approximation, and regression. We further apply our framework to online submodular function minimization using $(1\pm\varepsilon)$-cut sparsifiers of submodular hypergraphs, obtaining a small-loss regret bound of $\tilde O(n^{3/4}(1 + \mathrm{OPT}_T^{3/4}))$, where $n$ is the ground-set size. Our approach sheds light on the power of sparsification and related techniques in establishing small-loss regret bounds in the random-order model.
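
As a rough illustration of where the conjugate form can arise (an assumed decomposition for the sake of the example, not spelled out in the abstract): suppose the $(1+\varepsilon)$-approximate conversion pays roughly $\varepsilon \cdot \mathrm{OPT}_T$ in approximation error plus $\tilde O(\varphi(\varepsilon))$ from average sensitivity. Under the hypothetical scaling $\varphi(\varepsilon) = c/\varepsilon$, balancing the two terms with $\varepsilon \approx \sqrt{c/\mathrm{OPT}_T}$ gives regret $\tilde O(\sqrt{c\,\mathrm{OPT}_T})$, a small-loss bound that vanishes as $\mathrm{OPT}_T \to 0$; optimizing over $\varepsilon$ for a general $\varphi$ produces a bound of the conjugate form $\tilde O(\varphi^{\star}(\mathrm{OPT}_T))$, which the paper obtains via an adaptive, AdaGrad-style choice of $\varepsilon$.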

Top-level tags: theory, machine learning, model evaluation
Detailed tags: online learning, regret bounds, random-order model, average sensitivity, small-loss regret

From Average Sensitivity to Small-Loss Regret Bounds under Random-Order Model


1️⃣ One-sentence summary

This paper proposes a new method that converts the approximation guarantees of offline algorithms into small-loss regret bounds for online learning over randomly ordered data streams, and successfully applies it to several machine-learning problems such as clustering and regression.

Source: arXiv:2602.09457