arXiv submission date: 2026-03-04
📄 Abstract - What Does Flow Matching Bring To TD Learning?

Recent work shows that flow matching can be effective for scalar Q-value function estimation in reinforcement learning (RL), but it remains unclear why, or how this approach differs from standard critics. Contrary to conventional belief, we show that their success is not explained by distributional RL, as explicitly modeling return distributions can reduce performance. Instead, we argue that reading out values via integration, together with dense velocity supervision at each step of that integration during training, improves TD learning via two mechanisms. First, it enables robust value prediction through *test-time recovery*, whereby iterative computation through integration dampens errors in early value estimates as more integration steps are performed. This recovery mechanism is absent in monolithic critics. Second, supervising the velocity field at multiple interpolant values induces more *plastic* feature learning within the network, allowing critics to represent non-stationary TD targets without discarding previously learned features or overfitting to individual TD targets encountered during training. We formalize these effects and validate them empirically, showing that flow-matching critics substantially outperform monolithic critics (2× in final performance and around 5× in sample efficiency) in settings where loss of plasticity poses a challenge, e.g., in high-UTD online RL problems, while remaining stable during learning.
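The "reading out values via integration" idea can be illustrated with a toy sketch. The snippet below is not the paper's implementation; it assumes a learned velocity field `v(x, t)` and shows how a scalar value is produced by Euler-integrating that field from t=0 to t=1, so that an error at an early step can be corrected by later steps (the test-time recovery mechanism described above). The closed-form velocity field used here is a stand-in chosen so its integral lands on a known target.

```python
def flow_value(velocity, x0=0.0, n_steps=8):
    """Read out a scalar value by Euler-integrating a velocity field
    v(x, t) from t=0 to t=1, starting from x0.

    Each step nudges the current estimate using the local velocity, so
    early errors get damped as more integration steps are performed.
    """
    x, dt = x0, 1.0 / n_steps
    for k in range(n_steps):
        t = k * dt
        x += dt * velocity(x, t)
    return x

# Toy velocity field (hypothetical, for illustration): it steers any
# starting point toward a fixed target value Q_target = 3.0 by t=1,
# mimicking a trained critic whose integral equals the Q-value.
Q_target = 3.0
v = lambda x, t: (Q_target - x) / (1.0 - t + 1e-8)

print(round(flow_value(v, x0=-5.0, n_steps=64), 3))  # → 3.0
```

Even from a badly wrong initial guess (x0 = -5.0), the integrated readout converges to the target; a monolithic critic has no analogous correction loop at inference time.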

Top-level tags: reinforcement learning, model training, theory
Detailed tags: flow matching, TD learning, value function, plasticity, integration

What Does Flow Matching Bring To TD Learning?


1️⃣ One-sentence summary

This paper finds that flow matching is effective for estimating Q-value functions in reinforcement learning not because it better models the return distribution, but because two mechanisms, test-time error recovery during integration and more plastic feature learning, substantially improve the stability and sample efficiency of temporal-difference learning.
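The second mechanism, dense velocity supervision, can be sketched as follows. Assuming the standard linear interpolant used in flow matching (not necessarily the paper's exact parameterization), each scalar TD target yields a whole family of supervised (x_t, t) pairs rather than a single regression target, which is what the abstract credits for the more plastic feature learning.

```python
import numpy as np

rng = np.random.default_rng(0)

def velocity_targets(x0, q_target, t):
    """Linear-interpolant flow matching for a scalar value.

    The interpolant is x_t = (1 - t) * x0 + t * q_target, so the target
    velocity dx_t/dt is the constant q_target - x0. Sampling many (x0, t)
    pairs gives dense supervision for a single TD target q_target.
    """
    xt = (1.0 - t) * x0 + t * q_target
    vt = q_target - x0 + 0.0 * t  # broadcast to t's shape
    return xt, vt

# One TD target, many supervised points along the interpolation path
# (names and shapes here are illustrative, not from the paper):
x0 = rng.normal(size=(4,))    # noise samples at t=0
t = rng.uniform(size=(4,))    # interpolant times in [0, 1]
xt, vt = velocity_targets(x0, q_target=2.5, t=t)
```

A monolithic critic would instead fit the single pair (state-action, 2.5); here the same target produces supervision at every sampled interpolant value t.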

Source: arXiv:2603.04333