📄 Abstract - Does "Do Differentiable Simulators Give Better Policy Gradients?" Give Better Policy Gradients?
In policy gradient reinforcement learning, access to a differentiable model enables first-order gradient estimation, which accelerates learning compared to relying solely on derivative-free zeroth-order estimators. However, discontinuous dynamics bias first-order estimates and undermine their effectiveness. Prior work addressed this bias by constructing a confidence interval around the zeroth-order REINFORCE gradient estimator and using those bounds to detect discontinuities. The REINFORCE estimator is notoriously noisy, however, and we find that this approach requires task-specific hyperparameter tuning and has low sample efficiency. This paper asks whether such bias is the primary obstacle and what minimal fixes suffice. First, we re-examine standard discontinuous settings from prior work and introduce DDCG, a lightweight test that switches estimators in nonsmooth regions; with a single hyperparameter, DDCG achieves robust performance and remains reliable at small sample sizes. Second, on differentiable robotics control tasks, we present IVW-H, a per-step inverse-variance weighting scheme that stabilizes variance without explicit discontinuity detection and yields strong results. Together, these findings indicate that while estimator switching improves robustness in controlled studies, careful variance control often dominates in practical deployments.
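The abstract describes IVW-H only as a per-step inverse-variance scheme, so here is a minimal sketch of the standard inverse-variance weighting rule such a scheme would follow (weight each estimator by the reciprocal of its empirical variance). The function name `inverse_variance_combine`, the batch layout, and the scalar variance summary are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def inverse_variance_combine(grad_fo: np.ndarray, grad_zo: np.ndarray) -> np.ndarray:
    """Blend first-order (grad_fo) and zeroth-order (grad_zo) gradient
    estimates, each of shape (n_samples, dim), by inverse-variance weighting.
    Illustrative sketch only; not the paper's IVW-H implementation."""
    n = grad_fo.shape[0]
    mean_fo, mean_zo = grad_fo.mean(axis=0), grad_zo.mean(axis=0)
    # Empirical variance of each *mean* estimator: per-dimension sample
    # variance summed over dimensions, divided by the batch size.
    var_fo = grad_fo.var(axis=0).sum() / n
    var_zo = grad_zo.var(axis=0).sum() / n
    # Inverse-variance weight on the first-order term: alpha -> 1 when the
    # first-order estimator is the lower-variance one, alpha -> 0 otherwise.
    alpha = var_zo / (var_fo + var_zo + 1e-12)
    return alpha * mean_fo + (1.0 - alpha) * mean_zo
```

Weighting by 1/variance is the minimum-variance way to combine two independent unbiased estimates; the catch, and the reason the abstract contrasts this with explicit discontinuity detection, is that the first-order term is no longer unbiased near discontinuities.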
Does "Do Differentiable Simulators Give Better Policy Gradients?'' Give Better Policy Gradients?
1️⃣ One-Sentence Summary
This paper finds that in reinforcement learning, although first-order gradient estimation via a differentiable model accelerates learning, discontinuities in the environment dynamics introduce bias; through two lightweight methods (DDCG and IVW-H), the authors show that simple estimator switching and careful variance control are often more decisive and effective in practical tasks than elaborate discontinuity detection.
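The text credits DDCG with a single-hyperparameter switching test but never spells out the test statistic. The sketch below assumes a disagreement-based proxy: fall back to the unbiased zeroth-order estimate when the two estimators differ by more than `tau` standard errors. The function name `switched_gradient` and the default value of `tau` are hypothetical.

```python
import numpy as np

def switched_gradient(grad_fo: np.ndarray, grad_zo: np.ndarray,
                      tau: float = 3.0) -> np.ndarray:
    """Single-threshold estimator switch in the spirit of DDCG: use the
    first-order mean unless it disagrees with the zeroth-order mean by
    more than tau standard errors (a crude nonsmoothness signal).
    Assumed test statistic; the paper's actual DDCG test may differ."""
    n = grad_fo.shape[0]
    mean_fo, mean_zo = grad_fo.mean(axis=0), grad_zo.mean(axis=0)
    # Standard error of the difference of the two mean estimates,
    # treating the two batches as independent.
    se = np.sqrt((grad_fo.var(axis=0) + grad_zo.var(axis=0)).sum() / n)
    disagreement = np.linalg.norm(mean_fo - mean_zo)
    # Apparently smooth region: trust the low-variance first-order gradient.
    # Apparent discontinuity: switch to the unbiased zeroth-order estimate.
    return mean_fo if disagreement <= tau * se else mean_zo
```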