arXiv submission date: 2026-04-21
📄 Abstract - Lyapunov-Certified Direct Switching Theory for Q-Learning

Q-learning is one of the most fundamental algorithms in reinforcement learning. We analyze constant-stepsize Q-learning through a direct stochastic switching system representation. The key observation is that the Bellman maximization error can be represented exactly by a stochastic policy. Therefore, the Q-learning error admits a switched linear conditional-mean recursion with martingale-difference noise. The intrinsic drift rate is the joint spectral radius (JSR) of the direct switching family, which can be strictly smaller than the standard row-sum rate. Using this representation, we derive a finite-time final-iterate bound via a JSR-induced Lyapunov function and then give a computable quadratic-certificate version.
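The abstract's key step is that the conditional-mean Q-learning error follows a switched linear recursion, whose drift rate is the joint spectral radius (JSR) of the switching family. The toy sketch below illustrates this idea only; the matrices, the synchronous-update assumption, and the Gelfand-style JSR estimate are illustrative choices, not the paper's exact construction. For a tabular MDP with stepsize `alpha` and discount `gamma`, each deterministic policy `sigma` induces a matrix `A_sigma = (1 - alpha) I + alpha * gamma * P @ Pi_sigma`, and the JSR of this family can be strictly below the classical row-sum rate `1 - alpha * (1 - gamma)`.

```python
import numpy as np
from itertools import product

# Illustrative sketch (not the paper's exact construction): synchronous tabular
# Q-learning on a random toy MDP. The conditional-mean error obeys
#   E[Delta_{k+1} | F_k] = A_{sigma_k} Delta_k,
#   A_sigma = (1 - alpha) I + alpha * gamma * P @ Pi_sigma,
# where Pi_sigma selects the actions of a deterministic policy sigma.

nS, nA = 2, 2
alpha, gamma = 0.5, 0.9
rng = np.random.default_rng(0)

# Random row-stochastic transition kernel P: rows indexed by (s, a) pairs.
P = rng.random((nS * nA, nS))
P /= P.sum(axis=1, keepdims=True)

def selector(policy):
    """Pi_sigma: maps Q-vectors (indexed by (s, a)) to the policy's action values."""
    Pi = np.zeros((nS, nS * nA))
    for s, a in enumerate(policy):
        Pi[s, s * nA + a] = 1.0
    return Pi

# Direct switching family: one matrix per deterministic policy.
family = [np.eye(nS * nA) * (1 - alpha) + alpha * gamma * P @ selector(pol)
          for pol in product(range(nA), repeat=nS)]

# Gelfand-style JSR estimates over switching words of length k:
# lower bound from spectral radii of products, upper bound from operator norms.
k = 6
lo, hi = 0.0, 0.0
for word in product(range(len(family)), repeat=k):
    M = np.eye(nS * nA)
    for i in word:
        M = family[i] @ M
    lo = max(lo, max(abs(np.linalg.eigvals(M))) ** (1 / k))
    hi = max(hi, np.linalg.norm(M, 2) ** (1 / k))

row_sum_rate = 1 - alpha * (1 - gamma)  # classical per-step contraction rate
print(f"JSR in [{lo:.4f}, {hi:.4f}]  vs  row-sum rate {row_sum_rate:.4f}")
```

Because every `A_sigma` has infinity-norm at most `(1 - alpha) + alpha * gamma`, the JSR lower bound can never exceed the row-sum rate; when the gap is strict, the switching analysis certifies faster contraction than the classical bound.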

Top-level tag: reinforcement learning theory
Detailed tags: q-learning lyapunov theory switching systems spectral radius finite-time analysis

Lyapunov-Certified Direct Switching Theory for Q-Learning


1️⃣ One-Sentence Summary

This paper proposes a new theoretical framework that models the Q-learning error process as a stochastic switching system. Using the joint spectral radius and a constructed Lyapunov function, it proves finite-time convergence of Q-learning with a constant stepsize and obtains a tighter convergence-rate bound than classical analyses.
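Based on the abstract's description, the objects in the summary plausibly take the following schematic form; the symbols below are illustrative notation, not taken verbatim from the paper.

```latex
% Error recursion: \Delta_k = Q_k - Q^*, switched linear conditional mean
% with martingale-difference noise w_k.
\Delta_{k+1} = A_{\sigma_k}\,\Delta_k + \alpha\, w_k,
\qquad \mathbb{E}[w_k \mid \mathcal{F}_k] = 0.

% Joint spectral radius of the direct switching family, and a JSR-induced
% Lyapunov norm that contracts at rate \rho_* + \epsilon:
\rho_* = \lim_{k \to \infty} \max_{\sigma_1, \dots, \sigma_k}
  \bigl\| A_{\sigma_k} \cdots A_{\sigma_1} \bigr\|^{1/k},
\qquad
\| A_\sigma x \|_V \le (\rho_* + \epsilon)\, \| x \|_V .

% Schematic finite-time final-iterate bound: geometric decay of the initial
% error at the JSR rate plus a stepsize-dependent noise floor.
\mathbb{E}\,\| \Delta_k \| \le C\,(\rho_* + \epsilon)^k \, \| \Delta_0 \|
  + C'\,\alpha .
```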

Source: arXiv: 2604.19569