arXiv submission date: 2026-05-03
📄 Abstract - Adversarial Imitation Learning with General Function Approximation: Theoretical Analysis and Practical Algorithms

Adversarial imitation learning (AIL), a prominent approach in imitation learning, has achieved significant practical success powered by neural network approximation. However, existing theoretical analyses of AIL are primarily confined to simplified settings, such as tabular and linear function approximation, and involve complex algorithmic designs that impede practical implementation. This creates a substantial gap between theory and practice. This paper bridges this gap by exploring the theoretical underpinnings of online AIL with general function approximation. We introduce a novel framework called optimization-based AIL (OPT-AIL), which performs online optimization for reward learning coupled with optimism-regularized optimization for policy learning. Within this framework, we develop two concrete methods: model-free OPT-AIL and model-based OPT-AIL. Our theoretical analysis demonstrates that both variants achieve polynomial expert sample complexity and interaction complexity for learning near-expert policies. To the best of our knowledge, they represent the first provably efficient AIL methods under general function approximation. From a practical standpoint, OPT-AIL requires only the approximate optimization of two objectives, thereby facilitating practical implementation. Empirical studies demonstrate that OPT-AIL outperforms previous state-of-the-art deep AIL methods across several challenging tasks.
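The abstract describes OPT-AIL as alternating two approximate optimizations: an online optimization step for the reward (pushing learned reward toward expert behavior and away from the learner's) and an optimism-regularized optimization step for the policy. The toy sketch below illustrates that two-objective structure in a multi-armed bandit; the bandit setting, linear reward parameterization, and count-based bonus are all simplifying assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

# Toy illustration of the OPT-AIL two-objective loop in a 5-armed bandit
# (a stand-in for an MDP; all specifics here are illustrative assumptions).
rng = np.random.default_rng(0)
n_actions = 5
expert_action = 2                # the "expert" always picks arm 2

theta = np.zeros(n_actions)      # reward parameters (one per arm)
logits = np.zeros(n_actions)     # policy parameters
counts = np.ones(n_actions)      # visit counts for the optimism bonus
eta_r, eta_p = 0.5, 1.0          # step sizes for reward and policy updates

for t in range(200):
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()
    a = rng.choice(n_actions, p=pi)
    counts[a] += 1

    # 1) Online reward step: raise the reward on the expert's action and
    #    lower it in proportion to the learner's action probabilities
    #    (a discriminator-style update).
    grad = -pi.copy()
    grad[expert_action] += 1.0
    theta = np.clip(theta + eta_r * grad, -5.0, 5.0)  # bounded reward class

    # 2) Optimism-regularized policy step: ascend the learned reward plus
    #    an exploration bonus that shrinks with visit counts.
    bonus = 1.0 / np.sqrt(counts)
    logits += eta_p * (theta + bonus)

pi = np.exp(logits - logits.max())
pi /= pi.sum()
print(pi.argmax())  # the learner concentrates on the expert's action
```

The key point the sketch preserves is that each iteration only requires approximately solving two simple objectives, which is what the abstract highlights as making OPT-AIL easy to implement in practice.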

Top-level tags: reinforcement learning, machine learning
Detailed tags: imitation learning, adversarial imitation learning, general function approximation, online optimization, sample complexity

Adversarial Imitation Learning with General Function Approximation: Theoretical Analysis and Practical Algorithms


1️⃣ One-sentence summary

This paper proposes OPT-AIL, a new adversarial imitation learning framework that couples online optimization for reward learning with optimism-regularized optimization for policy learning. It is the first AIL method with provable efficiency guarantees under general function approximation, is simple to implement, and experiments show it outperforms existing deep AIL methods.

Source: arXiv 2605.01778