Latent-Variable Learning of SPDEs via Wiener Chaos
1️⃣ One-Sentence Summary
This paper proposes a new method that learns the intrinsic stochastic forcing law of a stochastic partial differential equation purely from observations of its solutions, without prior knowledge of the noise or the initial condition, thereby capturing the system's essential randomness.
We study the problem of learning the law of linear stochastic partial differential equations (SPDEs) with additive Gaussian forcing from spatiotemporal observations. Most existing deep learning approaches either assume access to the driving noise or initial condition, or rely on deterministic surrogate models that fail to capture intrinsic stochasticity. We propose a structured latent-variable formulation that requires only observations of solution realizations and learns the underlying randomly forced dynamics. Our approach combines a spectral Galerkin projection with a truncated Wiener chaos expansion, yielding a principled separation between deterministic evolution and stochastic forcing. This reduces the infinite-dimensional SPDE to a finite system of parametrized ordinary differential equations governing latent temporal dynamics. The latent dynamics and stochastic forcing are jointly inferred through variational learning, allowing recovery of stochastic structure without explicit observation or simulation of noise during training. Empirical evaluation on synthetic data demonstrates state-of-the-art performance under comparable modeling assumptions across bounded and unbounded one-dimensional spatial domains.
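The core reduction described above, projecting the SPDE onto a finite spectral basis so that the mode coefficients obey a finite system of stochastically forced ODEs, can be sketched on a toy example. The following is a hypothetical illustration (not the paper's code, and with all parameter values assumed): a 1D stochastic heat equation with additive noise, whose Galerkin projection onto sine modes yields independent Ornstein–Uhlenbeck-type SDEs per mode, integrated here with Euler–Maruyama.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's method): spectral
# Galerkin truncation of the 1D stochastic heat equation
#     du = nu * u_xx dt + sigma dW(t, x)
# on [0, 2*pi]. Projecting onto K sine modes turns the infinite-dimensional
# SPDE into K independent SDEs for the mode coefficients a_k(t):
#     da_k = -nu * k^2 * a_k dt + sigma dbeta_k.

rng = np.random.default_rng(0)
nu, sigma = 0.1, 0.5            # diffusion and noise strengths (assumed values)
K, T, n_steps = 16, 1.0, 1000   # number of modes, time horizon, time steps
dt = T / n_steps
k = np.arange(1, K + 1)         # wavenumbers of the retained modes

a = np.zeros((n_steps + 1, K))      # latent mode coefficients over time
a[0] = rng.standard_normal(K) / k   # smooth-ish random initial condition
for n in range(n_steps):
    drift = -nu * k**2 * a[n]                            # deterministic evolution
    noise = sigma * np.sqrt(dt) * rng.standard_normal(K)  # stochastic forcing
    a[n + 1] = a[n] + drift * dt + noise                  # Euler-Maruyama step

# Reconstruct the solution field on a spatial grid from the Galerkin modes.
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u_T = a[-1] @ np.sin(np.outer(k, x))   # solution realization at final time
print(a.shape, u_T.shape)
```

In the paper's setting, only realizations like `u_T` (observed over space and time) would be available to the learner; the per-mode drift and the forcing statistics play the role of the latent quantities recovered by variational inference.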
Source: arXiv: 2602.11794