Aligning Agents via Planning: A Benchmark for Trajectory-Level Reward Modeling
1️⃣ One-sentence summary
This paper introduces Plan-RewardBench, a new benchmark designed to evaluate and improve reward models for agents performing complex tasks (such as tool use and planning). It finds that existing models perform poorly on long-horizon trajectories, underscoring the need for specialized training methods.
In classical Reinforcement Learning from Human Feedback (RLHF), Reward Models (RMs) serve as the fundamental signal provider for model alignment. As Large Language Models evolve into agentic systems capable of autonomous tool invocation and complex reasoning, the paradigm of reward modeling faces unprecedented challenges, most notably the lack of benchmarks specifically designed to assess RM capabilities within tool-integrated environments. To address this gap, we present Plan-RewardBench, a trajectory-level preference benchmark designed to evaluate how well judges distinguish preferred from distractor agent trajectories in complex tool-using scenarios. Plan-RewardBench covers four representative task families: (i) Safety Refusal, (ii) Tool Irrelevance/Unavailability, (iii) Complex Planning, and (iv) Robust Error Recovery. It comprises validated positive trajectories and confusable hard negatives constructed via multi-model natural rollouts, rule-based perturbations, and minimal-edit LLM perturbations. We benchmark representative RMs (generative, discriminative, and LLM-as-Judge) under a unified pairwise protocol, reporting accuracy trends across varying trajectory lengths and task categories. Furthermore, we provide diagnostic analyses of prevalent failure modes. Our results reveal that all three evaluator families face substantial challenges, with performance degrading sharply on long-horizon trajectories, underscoring the necessity for specialized training in agentic, trajectory-level reward modeling. Ultimately, Plan-RewardBench aims to serve as both a practical evaluation suite and a reusable blueprint for constructing agentic planning preference data.
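To make the unified pairwise protocol concrete, here is a minimal sketch of how a judge's pairwise accuracy over (preferred, distractor) trajectory pairs could be computed. All names are hypothetical; the paper's actual harness, trajectory schema, and judges are not specified here, and a toy length-based judge stands in for a real reward model:

```python
from typing import Callable, List, Tuple

# A trajectory is represented here as a plain string (e.g., a serialized
# sequence of tool calls and observations); real benchmarks use richer schemas.
Trajectory = str

def pairwise_accuracy(
    judge: Callable[[Trajectory, Trajectory], int],
    pairs: List[Tuple[Trajectory, Trajectory]],
) -> float:
    """Fraction of (preferred, distractor) pairs on which the judge
    selects the preferred trajectory. The judge returns 0 to pick the
    first argument and 1 to pick the second."""
    if not pairs:
        return 0.0
    correct = sum(1 for pos, neg in pairs if judge(pos, neg) == 0)
    return correct / len(pairs)

# Toy judge that prefers the shorter trajectory -- a stand-in for a real RM.
def toy_judge(a: Trajectory, b: Trajectory) -> int:
    return 0 if len(a) <= len(b) else 1

pairs = [
    ("plan -> call tool -> done", "plan -> call -> call -> loop -> done"),
    ("refuse unsafe request", "comply with unsafe request and elaborate"),
]
print(pairwise_accuracy(toy_judge, pairs))
```

In practice, each evaluator family (generative, discriminative, LLM-as-Judge) would be wrapped behind the same `judge` interface, which is what makes the protocol "unified": accuracy is comparable across families because every model sees the same pairs under the same decision rule.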
Source: arXiv: 2604.08178