Mitigating Mismatch within Reference-based Preference Optimization
1️⃣ One-sentence summary
This paper proposes an improved method called HyPO, which conditionally adjusts the role of the reference model during training. It addresses the problem that existing preference optimization algorithms stop learning too early on 'pessimistic' data, thereby improving the model's final performance while keeping training stable.
Direct Preference Optimization (DPO) has become the de facto standard for offline preference alignment of large language models, but its reliance on a reference policy introduces a critical tension. DPO weighs each update relative to the reference, which stabilizes training by keeping updates within a trusted region. This reliance becomes problematic for pessimistic pairs, where the reference model prefers the rejected response. For these pairs, DPO prematurely attenuates the gradient as soon as the policy margin ($\Delta_\theta$) merely beats the reference margin ($\Delta_{\mathrm{ref}}$), even if the policy is still wrong ($\Delta_\theta<0$). We name this failure premature satisfaction, a concrete form of the training-inference mismatch. Reference-free objectives remove this mismatch by optimizing the absolute margin, but at the cost of discarding the stabilizing signal of the reference. We mitigate this tension with Hybrid-DPO (HyPO), a drop-in modification to DPO that applies the reference conditionally: HyPO behaves exactly like DPO when the reference is optimistic or neutral, and treats the reference as neutral when it is pessimistic by replacing $\Delta_\theta-\Delta_{\mathrm{ref}}$ with $\Delta_\theta-\max\{0,\Delta_{\mathrm{ref}}\}$. This one-line change strictly strengthens per-example learning signals on pessimistic pairs while preserving DPO's objective form and computational cost. By conditionally debiasing the pessimistic reference signal, HyPO mitigates premature satisfaction; empirically, HyPO improves inference-aligned metrics and achieves higher pairwise win rates across preference alignment benchmarks. Our results provide evidence that direct preference alignment can be enhanced by conditionally debiasing the reference signal rather than discarding it.
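To make the conditional debiasing concrete, below is a minimal PyTorch-style sketch of the modified per-batch loss. It assumes the chosen/rejected log-probabilities under the policy and reference models are already computed; the function name `hypo_loss`, the argument names, and the default `beta` are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def hypo_loss(policy_chosen_logps, policy_rejected_logps,
              ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Sketch of the HyPO objective described in the abstract.

    Identical in form to DPO, except that a pessimistic reference margin
    (reference prefers the rejected response) is clamped to zero, so the
    per-example learning signal is not attenuated prematurely.
    """
    # Policy and reference log-probability margins (chosen minus rejected).
    policy_margin = policy_chosen_logps - policy_rejected_logps   # Delta_theta
    ref_margin = ref_chosen_logps - ref_rejected_logps            # Delta_ref

    # HyPO: treat a pessimistic reference (Delta_ref < 0) as neutral,
    # i.e. replace Delta_ref with max(0, Delta_ref).
    conditioned_ref_margin = torch.clamp(ref_margin, min=0.0)

    # Same logistic objective form and computational cost as DPO.
    logits = beta * (policy_margin - conditioned_ref_margin)
    return -F.logsigmoid(logits).mean()
```

When $\Delta_{\mathrm{ref}}\ge 0$ the clamp is a no-op and the loss reduces to standard DPO, which matches the conditional behavior described in the abstract.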
Source: arXiv: 2602.11902