C2: Scalable Rubric-Augmented Reward Modeling from Binary Preferences
1️⃣ One-Sentence Summary
This paper proposes C2, a new framework in which the reward model engages in "critical cooperation" with a rubric generator trained solely from binary preference data. It produces more reliable evaluation rubrics without any extra human annotation, substantially improving the reward model's judgment accuracy and scalability.
Rubric-augmented verification guides reward models with explicit evaluation criteria, yielding more reliable judgments than single-model verification. However, most existing methods require costly rubric annotations, limiting scalability. Moreover, we find that rubric generation is vulnerable to a failure of cooperation: low-quality rubrics actively mislead reward models rather than help them. Inspired by the principle of cooperative communication, we propose Cooperative yet Critical reward modeling (C2), a framework that significantly improves reward model judgments by having the reward model critically collaborate with a rubric generator trained solely from binary preferences. In C2, we synthesize helpful and misleading rubric pairs by measuring how each rubric shifts the reward model toward or away from the correct preference. Using these contrastive pairs, we train a cooperative rubric generator to propose helpful rubrics, and a critical verifier to assess rubric validity before making its judgment, following only the rubrics it deems helpful at inference time. C2 outperforms reasoning reward models trained on the same binary preferences, with gains of up to 6.5 points on RM-Bench and 6.0 points in length-controlled win rate on AlpacaEval 2.0. Without external rubric annotations, C2 enables an 8B reward model to match the performance achieved with rubrics from a 4× larger model. Overall, our work demonstrates that eliciting deliberate cooperation in rubric-augmented verification makes reward models more trustworthy in a scalable way.
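The abstract's core idea, labeling each candidate rubric by whether it shifts the reward model toward or away from the known-correct preference, can be sketched concretely. Below is a minimal illustrative sketch, not the authors' released code: `toy_reward_prob` is a hypothetical stand-in for a real reward model's preference probability, and the threshold `margin` is an assumed hyperparameter.

```python
from dataclasses import dataclass


@dataclass
class LabeledRubric:
    text: str
    delta: float  # shift in P(chosen preferred) induced by the rubric
    helpful: bool


def toy_reward_prob(prompt: str, chosen: str, rejected: str,
                    rubric: str | None = None) -> float:
    """Stand-in for a reward model's P(chosen > rejected). A real
    implementation would score both responses, optionally conditioned
    on the rubric text; here we fake the effect (longer, more specific
    rubrics help) purely so the example runs end to end."""
    base = 0.55
    if rubric is None:
        return base
    return base + (0.2 if len(rubric.split()) > 5 else -0.2)


def synthesize_rubric_pairs(prompt, chosen, rejected, candidates,
                            margin=0.05):
    """Label each candidate rubric by the preference shift it induces
    relative to the no-rubric baseline."""
    baseline = toy_reward_prob(prompt, chosen, rejected)
    helpful, misleading = [], []
    for rubric in candidates:
        delta = toy_reward_prob(prompt, chosen, rejected, rubric) - baseline
        if abs(delta) < margin:  # discard rubrics with no clear effect
            continue
        bucket = helpful if delta > 0 else misleading
        bucket.append(LabeledRubric(rubric, delta, delta > 0))
    # The contrastive pairs then train the cooperative generator (to
    # imitate helpful rubrics) and the critical verifier (to separate
    # the two classes before judging).
    return helpful, misleading


if __name__ == "__main__":
    h, m = synthesize_rubric_pairs(
        prompt="Explain photosynthesis to a child.",
        chosen="Plants use sunlight to make their own food...",
        rejected="Photosynthesis yields C6H12O6 via the Calvin cycle...",
        candidates=["Be accurate.",
                    "Reward simple, age-appropriate wording over jargon."],
    )
    print(len(h), "helpful;", len(m), "misleading")
```

At inference time, per the abstract, the trained critical verifier would score each generated rubric first and the judgment would follow only the rubrics it deems helpful.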
Source: arXiv: 2604.13618