On Dominant Manifolds in Reservoir Computing Networks
1️⃣ One-sentence summary
By analyzing a simplified linear reservoir computing model, this paper reveals how network training forms low-dimensional dominant manifolds according to the intrinsic properties of the data, establishes a theoretical connection between reservoir computing and the Dynamic Mode Decomposition algorithm, and offers a new perspective on how recurrent neural networks learn time series.
Understanding how training shapes the geometry of recurrent network dynamics is a central problem in time-series modeling. We study the emergence of low-dimensional dominant manifolds in the training of Reservoir Computing (RC) networks for temporal forecasting tasks. For a simplified linear and continuous-time reservoir model, we link the dimensionality and structure of the dominant modes directly to the intrinsic dimensionality and information content of the training data. In particular, for training data generated by an autonomous dynamical system, we relate the dominant modes of the trained reservoir to approximations of the Koopman eigenfunctions of the original system, illuminating an explicit connection between reservoir computing and the Dynamic Mode Decomposition algorithm. We illustrate the eigenvalue motion that generates the dominant manifolds during training in simulation, and discuss generalization to nonlinear RC via tangent dynamics and differential p-dominance.
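The abstract's link between trained reservoir modes and Dynamic Mode Decomposition can be illustrated with a minimal sketch (not from the paper's code; the system matrix and trajectory length below are illustrative assumptions): fit a least-squares linear map between time-shifted snapshots of an autonomous linear system and read off its dominant eigenvalues, which recover the spectrum of the generating dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative autonomous linear system x_{k+1} = A_true x_k
# (complex eigenvalue pair, so a single trajectory explores both modes)
A_true = np.array([[0.9, -0.2],
                   [0.1,  0.8]])

# Generate one trajectory of snapshots
x = rng.normal(size=2)
snapshots = []
for _ in range(50):
    snapshots.append(x)
    x = A_true @ x
X = np.array(snapshots).T          # columns are states x_0 .. x_{T-1}
X1, X2 = X[:, :-1], X[:, 1:]       # time-shifted snapshot matrices

# Exact DMD: least-squares operator A = X2 X1^+ (Moore-Penrose pseudoinverse)
A_dmd = X2 @ np.linalg.pinv(X1)
eigvals, modes = np.linalg.eig(A_dmd)

# For data generated exactly by a linear map, the DMD eigenvalues
# recover the spectrum of the true dynamics; the dominant manifold
# corresponds to the modes with the largest |eigenvalue|.
print(np.sort(np.abs(eigvals)))
print(np.sort(np.abs(np.linalg.eigvals(A_true))))
```

In the paper's setting the analogous spectrum emerges from training the reservoir's readout rather than from an explicit pseudoinverse, but the dominant modes play the same role.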
Source: arXiv:2604.05967