
arXiv submission date: 2026-04-06
📄 Abstract - Learning from Equivalence Queries, Revisited

Modern machine learning systems, such as generative models and recommendation systems, often evolve through a cycle of deployment, user interaction, and periodic model updates. This differs from standard supervised learning frameworks, which focus on loss or regret minimization over a fixed sequence of prediction tasks. Motivated by this setting, we revisit the classical model of learning from equivalence queries, introduced by Angluin (1988). In this model, a learner repeatedly proposes hypotheses and, when a deployed hypothesis is inadequate, receives a counterexample. Under fully adversarial counterexample generation, however, the model can be overly pessimistic. In addition, most prior work assumes a *full-information* setting, where the learner also observes the correct label of the counterexample, an assumption that is not always natural. We address these issues by restricting the environment to a broad class of less adversarial counterexample generators, which we call *symmetric*. Informally, such generators choose counterexamples based only on the symmetric difference between the hypothesis and the target. This class captures natural mechanisms such as random counterexamples (Angluin and Dohrn, 2017; Bhatia, 2021; Chase, Freitag, and Reyzin, 2024), as well as generators that return the simplest counterexample according to a prescribed complexity measure. Within this framework, we study learning from equivalence queries under both full-information and bandit feedback. We obtain tight bounds on the number of learning rounds in both settings and highlight directions for future work. Our analysis combines a game-theoretic view of symmetric adversaries with adaptive weighting methods and minimax arguments.
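The protocol described in the abstract can be sketched concretely. The following minimal Python example (not from the paper; all names are illustrative) models concepts as subsets of a finite domain, a "symmetric" environment that draws a counterexample uniformly at random from the symmetric difference h △ target, and bandit feedback where only the point is revealed, not its label. For this toy concept class, flipping the hypothesis's membership at the counterexample is always a correct update, so the loop halts in at most |domain| + 1 rounds:

```python
import random

def learn_subset(domain, target, seed=0):
    """Equivalence-query learning of an unknown subset `target` of `domain`.

    The environment is a symmetric counterexample generator: it picks a point
    uniformly at random from the symmetric difference h ^ target. Under bandit
    feedback, only the counterexample x is revealed, not target's label at x;
    flipping x's membership in h is nonetheless a correct update here, since
    the counterexample certifies that h is wrong at x.
    """
    rng = random.Random(seed)
    h = set()                          # initial hypothesis: the empty set
    rounds = 0
    while True:
        rounds += 1                    # one equivalence query per round
        diff = h ^ target              # points where h and target disagree
        if not diff:
            return h, rounds           # query answered "yes": h is the target
        x = rng.choice(sorted(diff))   # random symmetric counterexample
        h ^= {x}                       # flip membership at x (bandit update)

# Example: recover a hidden subset of {0, ..., 9}
hypothesis, n_rounds = learn_subset(set(range(10)), {1, 3, 7})
```

Each round permanently fixes one point of disagreement, so with three initially wrong points the run above uses exactly four rounds (three counterexamples plus the final confirming query). The paper's actual bounds concern far richer concept classes, where the learner must aggregate counterexamples (e.g. via adaptive weighting) rather than memorize them pointwise.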

Top-level tags: theory, machine learning, model evaluation
Detailed tags: equivalence queries, online learning, adversarial learning, bandit feedback, counterexample generation

Learning from Equivalence Queries, Revisited


1️⃣ One-sentence summary

This paper revisits the classical model of learning from equivalence queries. By introducing a class of less adversarial, "symmetric" counterexample generators that better match practical settings, and by considering both full-information and bandit feedback, it provides a new theoretical framework and tight round-complexity bounds for the periodic update-and-deploy cycle of modern machine learning systems such as generative models and recommendation systems.

Source: arXiv: 2604.04535