Accelerating PDE Surrogates via RL-Guided Mesh Optimization
1️⃣ One-sentence summary
This paper proposes a new method called RLMesh, which uses reinforcement learning to dynamically allocate computational resources for PDE simulations, concentrating mesh points in the regions most critical to the solution. This substantially reduces the number of expensive simulations needed to train deep learning surrogate models while preserving accuracy.
Deep surrogate models for parametric partial differential equations (PDEs) can deliver high-fidelity approximations but remain prohibitively data-hungry: training often requires thousands of fine-grid simulations, each incurring substantial computational cost. To address this challenge, we introduce RLMesh, an end-to-end framework for efficient surrogate training under a limited simulation budget. The key idea is to use reinforcement learning (RL) to adaptively allocate mesh grid points non-uniformly within each simulation domain, focusing numerical resolution on regions most critical for accurate PDE solutions. A lightweight proxy model further accelerates RL training by providing efficient reward estimates without full surrogate retraining. Experiments on PDE benchmarks demonstrate that RLMesh achieves accuracy competitive with baselines while requiring substantially fewer simulation queries. These results show that solver-level spatial adaptivity can dramatically improve the efficiency of surrogate training pipelines, enabling practical deployment of learning-based PDE surrogates across a wide range of problems.
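The core mechanism described above, turning a policy's allocation decision into a non-uniform mesh and scoring it with a cheap proxy reward instead of full surrogate retraining, can be illustrated with a minimal 1D sketch. Note that the function names (`mesh_from_density`, `proxy_reward`), the inverse-CDF mesh construction, and the interpolation-error reward are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def mesh_from_density(weights, n_points):
    # Turn per-region weights (standing in for an RL policy's action) into
    # a non-uniform mesh on [0, 1] via the inverse CDF of the implied
    # piecewise-constant density: more mesh points where weights are high.
    density = np.asarray(weights, dtype=float)
    density = density / density.sum()
    edges = np.linspace(0.0, 1.0, len(density) + 1)
    cdf = np.concatenate([[0.0], np.cumsum(density)])
    u = np.linspace(0.0, 1.0, n_points)
    return np.interp(u, cdf, edges)

def proxy_reward(mesh, f, n_test=2000):
    # Cheap reward estimate: negative RMS interpolation error of the
    # solution f sampled on the mesh, a stand-in for the lightweight
    # proxy model that avoids retraining the full surrogate.
    x = np.linspace(0.0, 1.0, n_test)
    err = np.interp(x, mesh, f(mesh)) - f(x)
    return -np.sqrt(np.mean(err ** 2))

# A PDE-like solution with a sharp front near x = 0.5 (the "critical region").
f = lambda x: np.tanh(50.0 * (x - 0.5))

# Same mesh budget (64 points); only the spatial allocation differs.
uniform = mesh_from_density(np.ones(10), 64)
adaptive = mesh_from_density(np.array([1, 1, 1, 1, 8, 8, 1, 1, 1, 1]), 64)
print(proxy_reward(adaptive, f) > proxy_reward(uniform, f))  # → True
```

An RL agent in this setting would iterate exactly this loop: propose `weights`, observe the proxy reward, and shift mass toward regions where resolution pays off, here the sharp front, where the adaptive mesh earns a strictly higher reward at the same point budget.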
Source: arXiv:2603.02066