arXiv submission date: 2026-02-19
📄 Abstract - MARS: Margin-Aware Reward-Modeling with Self-Refinement

Reward modeling is a core component of modern alignment pipelines such as RLHF and RLAIF, underpinning policy optimization methods such as PPO and TRPO. However, training reliable reward models relies heavily on human-labeled preference data, which is costly and limited, motivating the use of data augmentation. Existing augmentation approaches typically operate at the representation or semantic level and remain agnostic to the reward model's estimation difficulty. In this paper, we propose MARS, an adaptive, margin-aware augmentation and sampling strategy that explicitly targets the ambiguous cases and failure modes of the reward model. MARS concentrates augmentation on low-margin (ambiguous) preference pairs where the reward model is most uncertain, and iteratively refines the training distribution via hard-sample augmentation. We provide theoretical guarantees showing that this strategy increases the average curvature of the loss function, which enhances the information available to the estimator and improves conditioning, along with empirical results demonstrating consistent gains over uniform augmentation for robust reward modeling.
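As a rough illustration of the idea described above, the sketch below selects low-margin (ambiguous) preference pairs and augments only those for the next training round. It is a minimal sketch, not the paper's implementation: the names `reward_fn`, `augment_pair`, `PreferencePair`, and the threshold value are illustrative assumptions.

```python
# Minimal sketch of margin-aware sample selection for reward-model training.
# All names (reward_fn, augment_pair, PreferencePair) are illustrative
# placeholders, not the paper's actual API.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PreferencePair:
    prompt: str
    chosen: str
    rejected: str


def select_low_margin_pairs(
    pairs: List[PreferencePair],
    reward_fn: Callable[[str, str], float],
    margin_threshold: float = 0.5,  # hypothetical cutoff for "ambiguous"
) -> List[PreferencePair]:
    """Keep only pairs where the reward margin is small, i.e. the model is uncertain."""
    ambiguous = []
    for pair in pairs:
        # Margin = reward of the preferred response minus reward of the rejected one.
        margin = reward_fn(pair.prompt, pair.chosen) - reward_fn(pair.prompt, pair.rejected)
        if margin < margin_threshold:
            ambiguous.append(pair)
    return ambiguous


def margin_aware_round(
    pairs: List[PreferencePair],
    reward_fn: Callable[[str, str], float],
    augment_pair: Callable[[PreferencePair], List[PreferencePair]],
    margin_threshold: float = 0.5,
) -> List[PreferencePair]:
    """One refinement round: augment only the low-margin pairs and return
    the enlarged training set for the next reward-model update."""
    hard_pairs = select_low_margin_pairs(pairs, reward_fn, margin_threshold)
    augmented = [new for p in hard_pairs for new in augment_pair(p)]
    return pairs + augmented
```

Repeating this round after each reward-model update gives the iterative refinement loop the abstract describes: as the model improves, the set of low-margin pairs shifts, and augmentation keeps targeting whatever remains ambiguous.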

Top-level tags: model training, machine learning theory
Detailed tags: reward modeling, data augmentation, rlhf, preference learning, margin-aware sampling

MARS: Margin-Aware Reward-Modeling with Self-Refinement


1️⃣ One-Sentence Summary

This paper proposes a new method called MARS, which identifies the ambiguous samples that the reward model finds hardest to judge and generates more data targeted at those cases for training, thereby significantly improving the reward model's accuracy and robustness while reducing reliance on expensive human annotation.

Source: arXiv 2602.17658