arXiv submission date: 2026-02-26
📄 Abstract - Learning Physical Operators using Neural Operators

Neural operators have emerged as promising surrogate models for solving partial differential equations (PDEs), but struggle to generalise beyond training distributions and are often constrained to a fixed temporal discretisation. This work introduces a physics-informed training framework that addresses these limitations by decomposing PDEs using operator splitting methods, training separate neural operators to learn individual non-linear physical operators while approximating linear operators with fixed finite-difference convolutions. This modular mixture-of-experts architecture enables generalisation to novel physical regimes by explicitly encoding the underlying operator structure. We formulate the modelling task as a neural ordinary differential equation (ODE) where these learned operators constitute the right-hand side, enabling continuous-in-time predictions through standard ODE solvers and implicitly enforcing PDE constraints. Demonstrated on incompressible and compressible Navier-Stokes equations, our approach achieves better convergence and superior performance when generalising to unseen physics. The method remains parameter-efficient, enabling temporal extrapolation beyond training horizons, and provides interpretable components whose behaviour can be verified against known physics.
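The operator-splitting idea in the abstract can be illustrated with a toy sketch. Below, the 1D viscous Burgers equation is decomposed into a linear diffusion term, handled by a fixed finite-difference convolution stencil, and a nonlinear advection term, which stands in for where a trained neural operator would plug in (here it is hard-coded analytically, since no trained model is available). Both parts form the right-hand side of an ODE integrated with a standard RK4 solver, mirroring the paper's continuous-in-time neural ODE formulation. All function names and parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def linear_op(u, nu, dx):
    # Fixed finite-difference "convolution" for the linear diffusion term
    # nu * u_xx: stencil [1, -2, 1] / dx^2 with periodic boundaries.
    return nu * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

def nonlinear_op(u, dx):
    # Placeholder for the learned neural operator handling the nonlinear
    # physics. Here: Burgers advection -u * u_x via central differences.
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    return -u * ux

def rhs(u, nu, dx):
    # Operator-split right-hand side of the ODE: du/dt = N(u) + L(u).
    return nonlinear_op(u, dx) + linear_op(u, nu, dx)

def rk4_step(u, dt, nu, dx):
    # One explicit RK4 step; any off-the-shelf ODE solver could be
    # substituted, which is what enables continuous-in-time prediction.
    k1 = rhs(u, nu, dx)
    k2 = rhs(u + 0.5 * dt * k1, nu, dx)
    k3 = rhs(u + 0.5 * dt * k2, nu, dx)
    k4 = rhs(u + dt * k3, nu, dx)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Toy setup: 1D viscous Burgers on a periodic domain [0, 2*pi).
nx, nu, T, dt = 64, 0.05, 1.0, 0.01
x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)
for _ in range(int(T / dt)):
    u = rk4_step(u, dt, nu, dx)
```

In the paper's framework, `nonlinear_op` would be a trained neural operator and the full right-hand side would be passed to an ODE solver, so the temporal discretisation is never baked into the model itself.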

Top-level tags: machine learning, model training, theory
Detailed tags: neural operators, partial differential equations, physics-informed, operator splitting, neural ODE

Learning Physical Operators using Neural Operators


1️⃣ One-sentence summary

This paper proposes a new physics-informed training framework that decomposes partial differential equations into linear and nonlinear operators, learning them with fixed convolutions and trainable neural operators respectively, yielding a modular, interpretable, continuous-in-time prediction model that generalises to novel physical regimes.

Source: arXiv 2602.23113