GARDO: Reinforcing Diffusion Models without Reward Hacking
1️⃣ One-sentence summary
This paper proposes a new framework called GARDO that addresses three common failure modes of RL fine-tuning for diffusion models: reward hacking, insufficient exploration, and mode collapse. It does so by selectively penalizing high-uncertainty samples, periodically updating the reference model, and amplifying rewards for generations that are both high-quality and diverse, thereby improving the quality and diversity of image generation without sacrificing efficiency.
Fine-tuning diffusion models via online reinforcement learning (RL) has shown great potential for enhancing text-to-image alignment. However, since precisely specifying a ground-truth objective for visual tasks remains challenging, the models are often optimized using a proxy reward that only partially captures the true goal. This mismatch often leads to reward hacking, where proxy scores increase while real image quality deteriorates and generation diversity collapses. While common solutions add regularization against the reference policy to prevent reward hacking, they compromise sample efficiency and impede the exploration of novel, high-reward regions, as the reference policy is usually sub-optimal. To address the competing demands of sample efficiency, effective exploration, and mitigation of reward hacking, we propose Gated and Adaptive Regularization with Diversity-aware Optimization (GARDO), a versatile framework compatible with various RL algorithms. Our key insight is that regularization need not be applied universally; instead, it is highly effective to selectively penalize a subset of samples that exhibit high uncertainty. To address the exploration challenge, GARDO introduces an adaptive regularization mechanism wherein the reference model is periodically updated to match the capabilities of the online policy, ensuring a relevant regularization target. To address the mode collapse issue in RL, GARDO amplifies the rewards for high-quality samples that also exhibit high diversity, encouraging mode coverage without destabilizing the optimization process. Extensive experiments across diverse proxy rewards and hold-out unseen metrics consistently show that GARDO mitigates reward hacking and enhances generation diversity without sacrificing sample efficiency or exploration, highlighting its effectiveness and robustness.
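The three mechanisms in the abstract can be sketched as a per-sample objective: a KL penalty gated onto only the high-uncertainty subset of a batch, a diversity bonus applied only to high-quality samples, and a reference policy that is periodically refreshed from the online policy. This is a minimal illustrative sketch, not the paper's implementation; the function names, thresholds, and the choice of a quantile gate and median quality cutoff are all assumptions.

```python
import numpy as np

def gated_adaptive_objective(rewards, kl_to_ref, uncertainty, diversity,
                             tau=0.7, beta=0.1, alpha=0.5):
    """Hypothetical GARDO-style per-sample objective.

    rewards     : proxy reward per sample
    kl_to_ref   : KL divergence to the reference policy per sample
    uncertainty : uncertainty estimate per sample (higher = riskier)
    diversity   : diversity score per sample (e.g. distance to batch mean)
    tau         : uncertainty quantile above which the penalty is gated on
    beta        : regularization strength
    alpha       : diversity amplification for high-quality samples
    """
    rewards = np.asarray(rewards, dtype=float)
    kl_to_ref = np.asarray(kl_to_ref, dtype=float)
    uncertainty = np.asarray(uncertainty, dtype=float)
    diversity = np.asarray(diversity, dtype=float)

    # Gated regularization: penalize only the high-uncertainty subset,
    # leaving confident samples free to explore.
    gate = uncertainty > np.quantile(uncertainty, tau)

    # Diversity-aware shaping: amplify rewards of samples that are both
    # high-quality (above the batch median) and diverse.
    high_quality = rewards > np.median(rewards)
    shaped = rewards * (1.0 + alpha * diversity * high_quality)

    return shaped - beta * gate * kl_to_ref

def update_reference(ref_params, online_params, step, period=100):
    """Adaptive reference: periodically snapshot the online policy so the
    regularization target keeps pace with the policy's capabilities."""
    return dict(online_params) if step % period == 0 else ref_params
```

The gate means most samples incur no penalty at all, which is how this scheme avoids the sample-efficiency cost of uniform regularization; the periodic snapshot keeps the penalty anchored to a competent policy rather than the original sub-optimal one.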
Source: arXiv:2512.24138