Plug-and-Play Benchmarking of Reinforcement Learning Algorithms for Large-Scale Flow Control
1️⃣ One-sentence summary
This paper introduces FluidGym, a fully differentiable, standalone benchmark suite that addresses a core problem in reinforcement learning for active flow control: inconsistent experimental setups make fair comparison across studies difficult. FluidGym provides a standardized evaluation platform for future research.
Reinforcement learning (RL) has shown promising results in active flow control (AFC), yet progress in the field remains difficult to assess as existing studies rely on heterogeneous observation and actuation schemes, numerical setups, and evaluation protocols. Current AFC benchmarks attempt to address these issues but heavily rely on external computational fluid dynamics (CFD) solvers, are not fully differentiable, and provide limited 3D and multi-agent support. To overcome these limitations, we introduce FluidGym, the first standalone, fully differentiable benchmark suite for RL in AFC. Built entirely in PyTorch on top of the GPU-accelerated PICT solver, FluidGym runs in a single Python stack, requires no external CFD software, and provides standardized evaluation protocols. We present baseline results with PPO and SAC and release all environments, datasets, and trained models as public resources. FluidGym enables systematic comparison of control methods, establishes a scalable foundation for future research in learning-based flow control, and is available at this https URL.
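The abstract describes FluidGym as a suite of standardized RL environments running in a single Python stack. The paper's actual API is not shown here, so the sketch below only illustrates the generic Gym-style reset/step protocol that such a benchmark typically exposes; the class name `ToyFlowEnv`, its toy dynamics, and its reward are illustrative assumptions, not FluidGym's real interface.

```python
import random


class ToyFlowEnv:
    """Hypothetical Gym-style environment sketch (NOT FluidGym's actual API).

    Mimics the reset/step protocol an active-flow-control benchmark
    typically exposes: observations stand in for flow probes, actions for
    actuator amplitudes, and the reward penalizes a drag-like quantity.
    """

    def __init__(self, n_probes=4, seed=0):
        self.n_probes = n_probes
        self.rng = random.Random(seed)
        self.state = None

    def reset(self):
        # Initial probe readings (stand-ins for velocity/pressure sensors).
        self.state = [self.rng.uniform(-1.0, 1.0) for _ in range(self.n_probes)]
        return list(self.state)

    def step(self, action):
        # Toy dynamics: actuation nudges the probe signal; reward = -"drag".
        self.state = [0.9 * s + 0.1 * action for s in self.state]
        drag = sum(abs(s) for s in self.state) / self.n_probes
        reward = -drag
        done = drag < 1e-3
        return list(self.state), reward, done, {}


# A short random-policy rollout, as an RL library (e.g. one running PPO or
# SAC) would drive the environment.
env = ToyFlowEnv(seed=42)
obs = env.reset()
total_reward = 0.0
for _ in range(10):
    action = env.rng.uniform(-1.0, 1.0)  # placeholder for a learned policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break
print(f"return over rollout: {total_reward:.3f}")
```

Because FluidGym is built on a differentiable solver, the real environments would additionally allow gradients of the reward to flow back through the simulation to the action, which this pure-Python toy does not model.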
Source: arXiv:2601.15015