arXiv submission date: 2026-03-11
📄 Abstract - Ergodicity in reinforcement learning

In reinforcement learning, we typically aim to optimize the expected value of the sum of rewards an agent collects over a trajectory. However, if the process generating these rewards is non-ergodic, the expected value, i.e., the average over infinitely many trajectories with a given policy, is uninformative for the average over a single, but infinitely long trajectory. Thus, if we care about how the individual agent performs during deployment, the expected value is not a good optimization objective. In this paper, we discuss the impact of non-ergodic reward processes on reinforcement learning agents through an instructive example, relate the notion of ergodic reward processes to more widely used notions of ergodic Markov chains, and present existing solutions that optimize long-term performance of individual trajectories under non-ergodic reward dynamics.
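The gap the abstract describes between the expectation over many trajectories and the time average along a single trajectory can be seen in a small simulation. The multiplicative process below is a hypothetical illustration (a classic "multiplicative gamble", not taken from the paper): the expected per-step growth factor is above 1, yet the per-step log growth rate is negative, so a typical individual trajectory decays even though the ensemble mean grows.

```python
import numpy as np

# Hypothetical non-ergodic reward process (illustrative, not from the paper):
# each step multiplies an agent's "wealth" by 1.5 or 0.6 with equal probability.
up, down, p = 1.5, 0.6, 0.5

# Ensemble perspective: the expected per-step growth factor exceeds 1 ...
expected_factor = p * up + (1 - p) * down  # 0.5*1.5 + 0.5*0.6 = 1.05

# ... but the time-average (per-step log) growth rate is negative, so almost
# every individual trajectory eventually decays toward zero.
time_avg_growth = p * np.log(up) + (1 - p) * np.log(down)  # ≈ -0.053

# Simulate many independent trajectories and look at the typical outcome.
rng = np.random.default_rng(0)
n_traj, n_steps = 10_000, 100
wealth = rng.choice([up, down], size=(n_traj, n_steps)).prod(axis=1)

print(f"expected factor per step: {expected_factor:.3f}")    # > 1
print(f"time-average growth rate: {time_avg_growth:.3f}")    # < 0
print(f"median final wealth:      {np.median(wealth):.2e}")  # far below 1
```

Optimizing the expected value here favors a policy whose mean return is inflated by vanishingly rare lucky trajectories, while the median (what a single deployed agent typically experiences) collapses, which is exactly the mismatch the paper targets.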

Top-level tags: reinforcement learning theory agents
Detailed tags: ergodicity reward processes policy optimization markov chains trajectory performance

Ergodicity in reinforcement learning


1️⃣ One-sentence summary

This paper points out that when the reward process in reinforcement learning is non-ergodic, the conventional expected-value objective cannot guarantee an individual agent's long-term performance, and it surveys existing methods that address this problem.

Source: arXiv:2603.10895