How Log-Barrier Helps Exploration in Policy Optimization
1️⃣ One-sentence summary
This paper proposes a new method that adds a log-barrier function to the policy optimization objective. Without increasing the sample complexity, it structurally forces the algorithm to explore, and thereby guarantees convergence to the optimal policy under more realistic conditions.
Recently, it has been shown that the Stochastic Gradient Bandit (SGB) algorithm converges to a globally optimal policy with a constant learning rate. However, these guarantees rely on unrealistic assumptions about the learning process, namely that the probability of the optimal action is always bounded away from zero. We attribute this to the lack of an explicit exploration mechanism in SGB. To address these limitations, we propose to regularize the SGB objective with a log-barrier on the parametric policy, structurally enforcing a minimal amount of exploration. We prove that Log-Barrier Stochastic Gradient Bandit (LB-SGB) matches the sample complexity of SGB, but also converges (at a slower rate) without any assumptions on the learning process. We also show a connection between the log-barrier regularization and Natural Policy Gradient, as both exploit the geometry of the policy space by controlling the Fisher information. We validate our theoretical findings through numerical simulations, showing the benefits of the log-barrier regularization.
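To make the idea concrete, here is a minimal toy sketch of a softmax stochastic gradient bandit whose objective is regularized with a log-barrier term λ·Σₐ log π(a), as the abstract describes. All specifics below (the number of arms, the Bernoulli reward means, and the values of λ and the learning rate) are hypothetical choices for illustration, not values taken from the paper; for a softmax policy the barrier gradient in parameter b works out to 1 − K·π(b).

```python
import numpy as np

rng = np.random.default_rng(0)

K = 5                                         # number of arms (toy setting)
means = np.array([0.1, 0.3, 0.5, 0.7, 0.9])   # Bernoulli arm means (hypothetical)
lam = 0.01                                    # log-barrier strength (assumed)
eta = 0.5                                     # constant learning rate (assumed)
theta = np.zeros(K)                           # softmax policy parameters

def softmax(x):
    z = np.exp(x - x.max())                   # subtract max for numerical stability
    return z / z.sum()

for t in range(20000):
    pi = softmax(theta)
    a = rng.choice(K, p=pi)                   # sample an action from the policy
    r = float(rng.random() < means[a])        # Bernoulli reward for the pulled arm
    # REINFORCE estimate of the reward gradient: r * (e_a - pi)
    grad_reward = -r * pi
    grad_reward[a] += r
    # Gradient of the barrier lam * sum_a log pi(a); for softmax this is
    # lam * (1 - K * pi(b)) in coordinate b, pushing every pi(b) away from 0.
    grad_barrier = lam * (1.0 - K * pi)
    theta += eta * (grad_reward + grad_barrier)

pi = softmax(theta)
print(pi)          # most mass on the best arm, but every arm keeps a
print(pi.argmax()) # nonzero probability floor because of the barrier
```

The barrier term is what distinguishes this from plain SGB: at any stationary point, each suboptimal arm retains probability on the order of λ, so the policy can never collapse onto a wrong arm early and stop exploring.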
Source: arXiv: 2603.15001