Generalization Error Bounds for Picard-Type Operator Learning in Nonlinear Parabolic PDEs
1️⃣ One-Sentence Summary
For nonlinear parabolic partial differential equations, this paper proposes an operator learning framework based on Duhamel--Picard iteration. Its theoretical analysis proves that increasing the number of iterations reduces the truncation error without unboundedly amplifying the estimation error, and it establishes rigorous generalization error bounds for the learned model.
Operator learning for partial differential equations (PDEs) aims to learn solution operators on infinite-dimensional function spaces from finite-resolution data. In this setting, it is important for the learned model to be discretization-invariant, or resolution-robust, and to reflect PDE-specific structure. It is therefore natural to ask how such structure should be encoded in the model architecture, hypothesis class, or learning procedure. In this paper, we study operator learning for solution operators of nonlinear parabolic PDEs based on Duhamel--Picard iteration. We formulate Picard iteration as an abstract state-transition model and present a theoretical framework for Picard-type operator learning. We derive implementation-agnostic generalization error bounds that separate the implementation error from the estimation error associated with the abstract state-transition model induced by Picard iteration. A key consequence is that increasing the Picard depth reduces the Picard truncation error without causing an unbounded growth of the entropy-based estimation error. We also extend the analysis to long-time prediction by rolling out the same learned local model over successive time blocks. Finally, we illustrate the theory for nonlinear heat equations on the torus using a Picard-type Fourier neural operator as a concrete implementation.
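The Duhamel--Picard iteration at the heart of the framework can be sketched as follows. The notation here is a standard textbook formulation and is assumed rather than taken from the paper: write the parabolic PDE in the abstract form $\partial_t u = A u + N(u)$ with initial data $u(0) = u_0$, where $A$ generates the linear semigroup $e^{tA}$ and $N$ is the nonlinearity.

```latex
% Duhamel (mild-solution) formula for \partial_t u = A u + N(u), u(0) = u_0:
u(t) \;=\; e^{tA} u_0 \;+\; \int_0^t e^{(t-s)A}\, N\bigl(u(s)\bigr)\,ds .

% Picard iteration: initialize with the linear flow
u^{(0)}(t) \;=\; e^{tA} u_0 ,
% and iterate
u^{(k+1)}(t) \;=\; e^{tA} u_0 \;+\; \int_0^t e^{(t-s)A}\, N\bigl(u^{(k)}(s)\bigr)\,ds .
```

Stopping at a finite depth $k$ incurs the Picard truncation error, while each map $u^{(k)} \mapsto u^{(k+1)}$ plays the role of one step of the abstract state-transition model that the learned operator approximates; this is why deeper Picard rollouts shrink the truncation error in the bounds.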
Source: arXiv: 2605.10277