arXiv submission date: 2026-03-09
📄 Abstract - Posterior Sampling Reinforcement Learning with Gaussian Processes for Continuous Control: Sublinear Regret Bounds for Unbounded State Spaces

We analyze the Bayesian regret of the Gaussian process posterior sampling reinforcement learning (GP-PSRL) algorithm. Posterior sampling is an effective heuristic for decision-making under uncertainty that has been used to develop successful algorithms for a variety of continuous control problems. However, theoretical work on GP-PSRL is limited. All known regret bounds either fail to achieve a tight dependence on a kernel-dependent quantity called the maximum information gain, or fail to properly account for the fact that the set of possible system states is unbounded. Through a recursive application of the Borell-Tsirelson-Ibragimov-Sudakov inequality, we show that, with high probability, the states actually visited by the algorithm are contained within a ball of near-constant radius. To obtain tight dependence on the maximum information gain, we use the chaining method to control the regret suffered by GP-PSRL. Our main result is a Bayesian regret bound of the order $\widetilde{\mathcal{O}}(H^{3/2}\sqrt{\gamma_{T/H} T})$, where $H$ is the horizon, $T$ is the number of time steps and $\gamma_{T/H}$ is the maximum information gain. With this result, we resolve the limitations of prior theoretical work on PSRL, and provide the theoretical foundation and tools for analyzing PSRL in complex settings.
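To make the algorithm being analyzed concrete, the GP-PSRL loop runs in episodes: sample a dynamics model from the Gaussian process posterior, act (near-)optimally under that sample for one horizon, then condition the posterior on the observed transitions. Below is a minimal illustrative sketch under assumptions not taken from the paper: a toy 1D system, a greedy "drive the state toward the origin" planner standing in for full optimal planning, and a numpy-only RBF-kernel GP. All names (`gp_psrl_episode`, `rbf`, `gp_posterior`) are hypothetical, not the authors' implementation.

```python
import numpy as np

def rbf(X, Y, ls=1.0):
    # Squared-exponential kernel on stacked (state, action) inputs.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior(X, y, Xs, noise=1e-2):
    # GP posterior mean and covariance at test inputs Xs, given data (X, y).
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    cov = rbf(Xs, Xs) - Ks.T @ Kinv @ Ks
    return mu, cov

def gp_psrl_episode(data_X, data_y, s0, actions, H, true_f, rng):
    # One PSRL episode: sample dynamics from the GP posterior on candidate
    # (state, action) pairs, act greedily toward the origin under the sampled
    # model, and return the newly observed transitions.
    s, new_X, new_y = s0, [], []
    for _ in range(H):
        cand = np.array([[s, a] for a in actions])
        if len(data_X):
            mu, cov = gp_posterior(np.array(data_X), np.array(data_y), cand)
        else:
            mu, cov = np.zeros(len(cand)), rbf(cand, cand)  # prior sample
        f_sample = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(len(cand)))
        a = actions[int(np.argmin(np.abs(f_sample)))]  # sampled-model greedy step
        s_next = true_f(s, a) + 0.01 * rng.standard_normal()  # noisy transition
        new_X.append([s, a])
        new_y.append(s_next)
        s = s_next
    return new_X, new_y

# Toy run: 3 episodes of horizon 5 on stable linear dynamics s' = 0.8 s + a.
rng = np.random.default_rng(0)
actions = np.linspace(-1.0, 1.0, 9)
data_X, data_y = [], []
for ep in range(3):
    nX, ny = gp_psrl_episode(data_X, data_y, s0=1.5, actions=actions, H=5,
                             true_f=lambda s, a: 0.8 * s + a, rng=rng)
    data_X += nX
    data_y += ny
```

Note how this mirrors the object of the paper's analysis: because actions are bounded and the sampled model is refined each episode, the visited states stay in a bounded region, which is exactly the high-probability containment property the bound relies on.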

Top-level tags: reinforcement learning theory machine learning
Detailed tags: bayesian regret gaussian processes posterior sampling continuous control regret bounds

Posterior Sampling Reinforcement Learning with Gaussian Processes for Continuous Control: Sublinear Regret Bounds for Unbounded State Spaces


1️⃣ One-sentence summary

This paper provides a rigorous theoretical analysis of a reinforcement learning algorithm that performs posterior sampling with Gaussian processes, proving that it achieves a sublinear regret bound even when the state space is unbounded, and thereby resolving limitations of prior theoretical work.

Source: arXiv:2603.08287