ARM-Thinker: Reinforcing Multimodal Generative Reward Models with Agentic Tool Use and Visual Reasoning
1️⃣ One-sentence summary
This paper introduces ARM-Thinker, an agentic reward model that autonomously invokes external tools to verify fine-grained visual details and multi-page document evidence, substantially improving both the accuracy and interpretability of reward judgments on multimodal tasks.
Reward models are critical for aligning vision-language systems with human preferences, yet current approaches suffer from hallucination, weak visual grounding, and an inability to use tools for verification, limiting their reliability on complex multimodal reasoning tasks. We present ARM-Thinker, an Agentic multimodal Reward Model that autonomously invokes external tools (e.g., image cropping, document page retrieval) to ground judgments in verifiable evidence, replacing static, non-interactive reward scoring. This enables the model to verify fine-grained visual details, cross-reference multi-page evidence, and validate reasoning claims — capabilities absent in existing reward models. We train ARM-Thinker with multi-stage reinforcement learning, jointly optimizing tool-calling decisions and judgment accuracy. To evaluate agentic reward modeling, we introduce ARMBench-VL, comprising three benchmarks that assess fine-grained visual grounding (image-level tools), multi-page document understanding (retrieval tools), and instruction following (text-level verification). ARM-Thinker achieves a +16.2% average improvement on reward modeling benchmarks and +9.6% on tool-use tasks, and outperforms baselines on multimodal math and logical reasoning benchmarks. Our results demonstrate that agentic capabilities significantly enhance both the accuracy and interpretability of reward models.
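To make the "agentic reward scoring" idea concrete, the sketch below shows the shape of a judgment loop that gathers tool evidence before scoring a candidate answer. All names here (`TOOLS`, `agentic_judge`, `ToolCall`) are hypothetical illustrations, not the paper's actual API; the real ARM-Thinker tools return image regions or retrieved pages, which are stubbed as strings, and the real tool-calling policy is learned via RL rather than passed in explicitly.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical tool registry: each tool maps arguments to textual evidence.
# ARM-Thinker's tools (image cropping, document page retrieval) would return
# pixel regions or page content; here they are stubbed with strings.
TOOLS: Dict[str, Callable[..., str]] = {
    "crop_image": lambda x0, y0, x1, y1: f"crop({x0},{y0},{x1},{y1}): region pixels",
    "retrieve_page": lambda page: f"page {page}: retrieved document text",
}

@dataclass
class ToolCall:
    name: str
    args: dict

def agentic_judge(candidate_answer: str, planned_calls: List[ToolCall]) -> dict:
    """Score a candidate answer after grounding the judgment in tool evidence.

    In the paper the model itself decides which tools to call (trained with
    multi-stage RL); here the calls are supplied explicitly and the score is
    a trivial placeholder conditioned only on whether evidence was gathered.
    """
    evidence: List[str] = []
    for call in planned_calls:
        tool = TOOLS[call.name]             # look up the registered tool
        evidence.append(tool(**call.args))  # execute it and collect evidence
    # A real reward model would condition its score on the evidence content;
    # this stub just returns the evidence trace alongside a dummy score.
    return {"score": 1.0 if evidence else 0.0, "evidence": evidence}

result = agentic_judge(
    "The chart peaks in Q3",
    [ToolCall("crop_image", {"x0": 0, "y0": 0, "x1": 64, "y1": 64}),
     ToolCall("retrieve_page", {"page": 3})],
)
```

The key design point the abstract emphasizes is that the evidence trace makes the judgment interpretable: every score can be audited against the concrete tool outputs it was grounded in, unlike a static scalar reward.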