
arXiv submission date: 2026-04-06
📄 Abstract - Anticipatory Reinforcement Learning: From Generative Path-Laws to Distributional Value Functions

This paper introduces Anticipatory Reinforcement Learning (ARL), a novel framework designed to bridge the gap between non-Markovian decision processes and classical reinforcement learning architectures, specifically under the constraint of a single observed trajectory. In environments characterised by jump-diffusions and structural breaks, traditional state-based methods often fail to capture the essential path-dependent geometry required for accurate foresight. We resolve this by lifting the state space into a signature-augmented manifold, where the history of the process is embedded as a dynamical coordinate. By utilising a self-consistent field approach, the agent maintains an anticipated proxy of the future path-law, allowing for a deterministic evaluation of expected returns. This transition from stochastic branching to a single-pass linear evaluation significantly reduces computational complexity and variance. We prove that this framework preserves fundamental contraction properties and ensures stable generalisation even in the presence of heavy-tailed noise. Our results demonstrate that by grounding reinforcement learning in the topological features of path-space, agents can achieve proactive risk management and superior policy stability in highly volatile, continuous-time environments.
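The signature lift described in the abstract can be sketched concretely: the agent's state is augmented with a truncated path signature (the iterated integrals of the observed trajectory), so history enters as an ordinary coordinate. The sketch below is illustrative only; the function names `signature_level2` and `augmented_state`, and the truncation at level 2, are assumptions for brevity, not details from the paper.

```python
import numpy as np

def signature_level2(path):
    """Truncated (level-2) signature of a piecewise-linear path.

    path: array of shape (T, d) of observations over time.
    Returns the concatenation of the level-1 terms (d values: total
    displacement) and the level-2 terms (d*d iterated integrals).
    """
    dx = np.diff(path, axis=0)       # segment increments, shape (T-1, d)
    s1 = dx.sum(axis=0)              # level 1: x_T - x_0
    d = path.shape[1]
    s2 = np.zeros((d, d))
    running = np.zeros(d)            # level-1 signature accumulated so far
    for step in dx:
        # Chen's identity for a linear segment:
        # S2 <- S2 + S1_prev (x) dx + (dx (x) dx) / 2
        s2 += np.outer(running, step) + 0.5 * np.outer(step, step)
        running += step
    return np.concatenate([s1, s2.ravel()])

def augmented_state(path):
    """Signature-augmented state: current observation plus path signature."""
    return np.concatenate([path[-1], signature_level2(path)])
```

For a monotone 1-D path from 0 to 1, the level-1 term is 1.0 and the level-2 term is 0.5 (the identity S² = (x_T − x_0)²/2 in one dimension), which is a quick sanity check on the recursion.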

Top-level tags: reinforcement learning, theory, model training
Detailed tags: non-Markovian decision processes, path-dependent geometry, signature-augmented manifold, distributional value functions, continuous-time environments

Anticipatory Reinforcement Learning: From Generative Path-Laws to Distributional Value Functions


1️⃣ One-Sentence Summary

This paper proposes a new method called Anticipatory Reinforcement Learning, which encodes historical path information into the state space so that an agent in complex, rapidly changing environments can anticipate future trends, achieving more stable and more proactive risk-control decisions at lower computational cost.

Source: arXiv 2604.04662