arXiv submission date: 2026-02-02
📄 Abstract - Zero-Shot Off-Policy Learning

Off-policy learning methods seek to derive an optimal policy directly from a fixed dataset of prior interactions. This objective presents significant challenges, primarily due to the inherent distributional shift and value-function overestimation bias. These issues become even more pronounced in zero-shot reinforcement learning, where an agent trained on reward-free data must adapt to new tasks at test time without additional training. In this work, we address the off-policy problem in a zero-shot setting by establishing a theoretical connection between successor measures and stationary density ratios. Using this insight, our algorithm can infer optimal importance sampling ratios, effectively performing a stationary distribution correction with an optimal policy for any task on the fly. We benchmark our method on motion-tracking tasks with the SMPL Humanoid, continuous control on ExoRL, and long-horizon tasks from OGBench. Our technique integrates seamlessly into forward-backward representation frameworks and enables fast adaptation to new tasks in a training-free regime. More broadly, this work bridges off-policy learning and zero-shot adaptation, offering benefits to both research areas.
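
A rough sketch of how such a connection can arise under the standard forward-backward (FB) parameterization (the notation and normalization below are assumptions; the paper's exact construction may differ): the successor measure is modeled with density F⊤B with respect to the data distribution ρ, so a stationary importance ratio for the zero-shot policy can be read off in closed form once the task embedding z is inferred.

```latex
% Standard FB parameterization (Touati & Ollivier); the paper's objective may differ.
M^{\pi_z}(s, a, \mathrm{d}s') \approx F(s, a, z)^\top B(s')\, \rho(\mathrm{d}s'),
\qquad
\pi_z(s) = \arg\max_a F(s, a, z)^\top z,
\qquad
z = \mathbb{E}_{s' \sim \rho}\!\left[ r(s')\, B(s') \right].

% Discounted occupancy of \pi_z started from the data distribution \rho:
d^{\pi_z}(\mathrm{d}s') = (1 - \gamma)\, \mathbb{E}_{(s,a) \sim \rho}\!\left[ M^{\pi_z}(s, a, \mathrm{d}s') \right].

% Its density with respect to \rho is a candidate stationary importance ratio,
% available at test time without further training:
w_z(s') = \frac{\mathrm{d}\, d^{\pi_z}}{\mathrm{d}\rho}(s')
        \approx (1 - \gamma)\, \mathbb{E}_{(s,a) \sim \rho}\!\left[ F(s, a, z) \right]^\top B(s').
```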

Top-level tags: reinforcement learning, theory, model evaluation
Detailed tags: off-policy learning, zero-shot adaptation, successor measures, stationary distribution correction, importance sampling

Zero-Shot Off-Policy Learning


1️⃣ One-Sentence Summary

This paper proposes a new method that, by establishing a theoretical connection between successor measures and stationary density ratios, can quickly infer an optimal policy for a new task directly from existing reward-free data, without any additional training. This effectively addresses the distributional shift and value-estimation bias problems in off-policy learning, and the method is validated on several robotic control benchmarks.
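
As a minimal illustrative sketch of what such training-free adaptation could look like with a pretrained FB model (all function names, shapes, and the linear stand-in networks below are hypothetical, not the paper's implementation): label a small batch of dataset states with the new task's reward, infer the task embedding z once, then act greedily and compute importance ratios without any gradient updates.

```python
# Hypothetical test-time sketch with a pretrained forward-backward (FB) model.
# The linear stand-ins for F and B are placeholders for pretrained networks.
import numpy as np

rng = np.random.default_rng(0)
D_Z, N_ACTIONS, D_STATE = 16, 5, 8

# Stand-ins for pretrained networks: F(s, a, z) -> R^d, B(s') -> R^d.
W_F = rng.normal(size=(D_STATE + N_ACTIONS + D_Z, D_Z))
W_B = rng.normal(size=(D_STATE, D_Z))

def F(s, a_onehot, z):
    return np.concatenate([s, a_onehot, z]) @ W_F

def B(s):
    return s @ W_B

def infer_task_embedding(states, rewards):
    """z = E_{s'~data}[ r(s') B(s') ], estimated from a few labeled samples."""
    return np.mean(rewards[:, None] * (states @ W_B), axis=0)

def act_greedy(s, z):
    """Zero-shot policy: pick the action maximizing F(s, a, z)^T z."""
    scores = [F(s, np.eye(N_ACTIONS)[a], z) @ z for a in range(N_ACTIONS)]
    return int(np.argmax(scores))

def density_ratio(s_next, z, data_states, data_actions_onehot, gamma=0.99):
    """Stationary ratio sketch: w_z(s') ~ (1-gamma) * E_data[F(s,a,z)]^T B(s')."""
    mean_F = np.mean(
        [F(s, a, z) for s, a in zip(data_states, data_actions_onehot)], axis=0
    )
    return (1.0 - gamma) * mean_F @ B(s_next)

# Usage: label dataset states with the new task's reward, infer z once,
# then act and reweight with no gradient updates.
states = rng.normal(size=(64, D_STATE))
rewards = rng.normal(size=64)
actions = np.eye(N_ACTIONS)[rng.integers(N_ACTIONS, size=64)]
z = infer_task_embedding(states, rewards)
a0 = act_greedy(states[0], z)
w0 = density_ratio(states[1], z, states, actions)
```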

Source: arXiv:2602.01962