An Invariant Compiler for Neural ODEs in AI-Accelerated Scientific Simulation
1️⃣ One-Sentence Summary
This paper proposes a method called the "invariant compiler," which automatically converts an ordinary neural ODE model into a specially structured one, ensuring that when the model simulates physical or other scientific processes, its predicted trajectories always obey fundamental physical laws such as energy conservation, yielding long-horizon predictions that are more reliable and more physically plausible.
Neural ODEs are increasingly used as continuous-time models for scientific and sensor data, but unconstrained neural ODEs can drift and violate domain invariants (e.g., conservation laws), yielding physically implausible solutions. In turn, this can compound error in long-horizon prediction and surrogate simulation. Existing solutions typically aim to enforce invariance by soft penalties or other forms of regularization, which can reduce overall error but do not guarantee that trajectories will not leave the constraint manifold. We introduce the invariant compiler, a framework that enforces invariants by construction: it treats invariants as first-class types and uses an LLM-driven compilation workflow to translate a generic neural ODE specification into a structure-preserving architecture whose trajectories remain on the admissible manifold in continuous time (and up to numerical integration error in practice). This compiler view cleanly separates what must be preserved (scientific structure) from what is learned from data (dynamics within that structure). It provides a systematic design pattern for invariant-respecting neural surrogates across scientific domains.
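The abstract contrasts soft penalties with enforcement by construction. One standard way to realize the latter, shown below as a minimal sketch that is not the paper's actual compiler, is to project an unconstrained vector field onto the tangent space of the invariant's level set: if $H(x)$ is the conserved quantity, replacing $f(x)$ with $f(x) - \frac{\nabla H \cdot f}{\|\nabla H\|^2}\nabla H$ makes $\dot H = 0$ exactly in continuous time, so the invariant holds up to numerical integration error. The toy system, the field `f_raw`, and the quadratic invariant are all illustrative assumptions.

```python
import numpy as np

def grad_H(x):
    # Gradient of the toy invariant H(x) = 0.5 * ||x||^2
    return x

def f_raw(x):
    # Stand-in for an unconstrained neural ODE vector field;
    # it does NOT conserve H (it slowly dissipates/injects energy)
    return np.array([x[1] - 0.3 * x[0], -x[0] + 0.1 * x[1]])

def f_projected(x):
    # Remove the component of f along grad H, so dH/dt = grad_H . f = 0
    g = grad_H(x)
    f = f_raw(x)
    return f - (g @ f) / (g @ g) * g

def integrate(f, x0, dt=1e-3, steps=5000):
    # Classical RK4: the invariant is preserved up to integration error
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        k1 = f(x)
        k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2)
        k4 = f(x + dt * k3)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

H = lambda x: 0.5 * float(x @ x)
x0 = np.array([1.0, 0.0])
x_raw = integrate(f_raw, x0)
x_proj = integrate(f_projected, x0)
print("drift, unconstrained:", abs(H(x_raw) - H(x0)))   # visibly nonzero
print("drift, projected:   ", abs(H(x_proj) - H(x0)))   # tiny (integrator error only)
```

The projection step plays the role of the "structure-preserving architecture" the abstract describes: the learnable part (`f_raw`) is free, while the wrapper guarantees the trajectory stays on the admissible manifold by construction rather than via a training penalty.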
Source: arXiv: 2603.23861