arXiv submission date: 2026-03-19
📄 Abstract - Online Learning and Equilibrium Computation with Ranking Feedback

Online learning in arbitrary, and possibly adversarial, environments has been extensively studied in sequential decision-making, and it is closely connected to equilibrium computation in game theory. Most existing online learning algorithms rely on \emph{numeric} utility feedback from the environment, which may be unavailable in human-in-the-loop applications and/or may be restricted by privacy concerns. In this paper, we study an online learning model in which the learner only observes a \emph{ranking} over a set of proposed actions at each timestep. We consider two ranking mechanisms: rankings induced by the \emph{instantaneous} utility at the current timestep, and rankings induced by the \emph{time-average} utility up to the current timestep, under both \emph{full-information} and \emph{bandit} feedback settings. Using the standard external-regret metric, we show that sublinear regret is impossible with instantaneous-utility ranking feedback in general. Moreover, when the ranking model is relatively deterministic, \emph{i.e.}, under the Plackett-Luce model with a temperature that is sufficiently small, sublinear regret is also impossible with time-average utility ranking feedback. We then develop new algorithms that achieve sublinear regret under the additional assumption that the utility sequence has sublinear total variation. Notably, for full-information time-average utility ranking feedback, this additional assumption can be removed. As a consequence, when all players in a normal-form game follow our algorithms, repeated play yields an approximate coarse correlated equilibrium. We also demonstrate the effectiveness of our algorithms in an online large-language-model routing task.
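As an illustration of the ranking feedback the abstract describes: under the Plackett-Luce model with temperature τ, a ranking over the proposed actions is drawn position by position, choosing each next item with probability proportional to exp(u_i/τ), so a smaller τ makes the ranking more deterministic (the regime where the paper's impossibility result applies). A minimal sketch, with illustrative function names not taken from the paper:

```python
import math
import random

def sample_pl_ranking(utilities, temperature=1.0, rng=random):
    """Sample a ranking under the Plackett-Luce model.

    Items are picked one at a time; among the remaining items,
    item i is chosen next with probability proportional to
    exp(utilities[i] / temperature).
    """
    remaining = list(range(len(utilities)))
    ranking = []
    while remaining:
        weights = [math.exp(utilities[i] / temperature) for i in remaining]
        total = sum(weights)
        r = rng.random() * total
        acc = 0.0
        for pos, i in enumerate(remaining):
            acc += weights[pos]
            if acc >= r:
                ranking.append(remaining.pop(pos))
                break
    return ranking
```

With a small temperature (e.g. `temperature=0.01`), the sampled ranking almost always equals the sort of actions by descending utility, which is the near-deterministic regime in which the paper shows sublinear regret is impossible under time-average utility ranking feedback.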

Top-level tags: theory machine learning agents
Detailed tags: online learning ranking feedback regret analysis equilibrium computation game theory

Online Learning and Equilibrium Computation with Ranking Feedback


1️⃣ One-sentence summary

This paper studies a new online learning setting in which the learner only observes a ranking over its proposed actions rather than numeric utility feedback, and designs new algorithms that learn effectively under certain conditions and, in repeated games, converge to an approximate equilibrium.

Source: arXiv:2603.19221