VisionCoach: Reinforcing Grounded Video Reasoning via Visual-Perception Prompting
1️⃣ One-Sentence Summary
This paper proposes a new method, VisionCoach, which adaptively applies visual prompts during training to guide the model toward question-relevant key information in the video. This substantially improves the model's ability to localize and track targets in video reasoning tasks, and the final model runs efficiently at inference without needing any extra prompts.
Video reasoning requires models to locate and track question-relevant evidence across frames. While reinforcement learning (RL) with verifiable rewards improves accuracy, it still struggles to achieve reliable spatio-temporal grounding during the reasoning process. Moreover, improving grounding typically relies on scaled training data or inference-time perception tools, which increases annotation or computational cost. To address this challenge, we propose VisionCoach, an input-adaptive RL framework that improves spatio-temporal grounding through visual prompting as training-time guidance. During RL training, visual prompts are selectively applied to challenging inputs to amplify question-relevant evidence and suppress distractors. The model then internalizes these improvements through self-distillation, enabling grounded reasoning directly on raw videos without visual prompting at inference. VisionCoach consists of two components: (1) a Visual Prompt Selector, which predicts appropriate prompt types conditioned on the video and question, and (2) a Spatio-Temporal Reasoner, optimized with RL under visual-prompt guidance and object-aware grounding rewards that enforce object identity consistency and multi-region bounding-box overlap. Extensive experiments demonstrate that VisionCoach achieves state-of-the-art performance under comparable settings across diverse video reasoning, video understanding, and temporal grounding benchmarks (V-STAR, VideoMME, WorldSense, VideoMMMU, PerceptionTest, and Charades-STA), while maintaining a single efficient inference pathway without external tools. Our results show that visual prompting during training improves grounded video reasoning, and that self-distillation enables the model to internalize this ability without requiring prompts at inference time.
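The object-aware grounding reward described above combines two signals: a gate on object identity consistency and a score from multi-region bounding-box overlap. A minimal sketch of such a reward, assuming it gates mean IoU on matching object labels (the function names, data layout, and exact combination rule here are illustrative assumptions, not the paper's implementation):

```python
# Illustrative sketch of an object-aware grounding reward.
# Assumption: the reward is 0 unless predicted object identities match the
# reference, and otherwise equals the mean IoU over all annotated regions.

def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def grounding_reward(pred, ref):
    """pred / ref: lists of (object_label, box) pairs, one per region.
    Identity-consistency gate: any label mismatch zeroes the reward;
    otherwise the reward is the mean multi-region bounding-box IoU."""
    if len(pred) != len(ref) or not ref:
        return 0.0
    if any(pl != rl for (pl, _), (rl, _) in zip(pred, ref)):
        return 0.0  # wrong object identity: no grounding credit
    ious = [box_iou(pb, rb) for (_, pb), (_, rb) in zip(pred, ref)]
    return sum(ious) / len(ious)
```

Gating on identity before scoring overlap prevents the model from collecting reward by drawing well-placed boxes around the wrong object, which is the failure mode an identity-consistency term is meant to penalize.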
Source: arXiv: 2603.14659