📄 Abstract - Look Again, Think Slowly: Enhancing Visual Reflection in Vision-Language Models

Recent advances in text-only "slow-thinking" reasoning have prompted efforts to transfer this capability to vision-language models (VLMs), in order to train visual reasoning models (**VRMs**). However, such transfer faces a critical challenge: effective "slow thinking" in VRMs requires **visual reflection**, the ability to check the reasoning process against visual information. Through quantitative analysis, we observe that current VRMs exhibit limited visual reflection, as their attention to visual information diminishes rapidly as responses grow longer. To address this challenge, we propose a new VRM, **Reflection-V**, which enhances visual reflection through reasoning-data construction for cold-start training and reward design for reinforcement learning (RL). First, we construct vision-centered reasoning data by leveraging an agent that interacts between VLMs and reasoning LLMs, enabling cold-start learning of visual reflection patterns. Second, a visual-attention-based reward model is employed during RL to encourage reasoning grounded in visual information. As a result, **Reflection-V** demonstrates significant improvements across multiple visual reasoning benchmarks. Furthermore, **Reflection-V** maintains a stronger and more consistent reliance on visual information during visual reasoning, indicating an effective enhancement of visual reflection capabilities.
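The abstract mentions a "visual-attention-based reward" but gives no formula. A minimal sketch of one plausible reading, assuming the reward is the average fraction of attention mass that generated tokens place on image tokens (the function name, shapes, and normalization below are illustrative assumptions, not the paper's actual method):

```python
def visual_attention_reward(attn, visual_mask):
    """Average fraction of attention mass placed on visual tokens.

    attn: list of per-step attention rows, each summing to 1 over the
          context tokens (one row per generated token).
    visual_mask: booleans marking which context tokens come from the image.
    """
    # For each generated step, sum the attention that falls on image tokens.
    per_step = [
        sum(a for a, is_visual in zip(row, visual_mask) if is_visual)
        for row in attn
    ]
    # Reward = mean over the whole response; higher means the model kept
    # "looking at" the image while reasoning, which is what the paper
    # reports diminishes in current VRMs as responses grow longer.
    return sum(per_step) / len(per_step)

# Toy example: 3 generated steps, 4 context tokens, first two visual.
attn = [
    [0.4, 0.3, 0.2, 0.1],
    [0.1, 0.1, 0.4, 0.4],
    [0.5, 0.3, 0.1, 0.1],
]
mask = [True, True, False, False]
reward = visual_attention_reward(attn, mask)  # (0.7 + 0.2 + 0.8) / 3
```

Such a scalar could then be mixed into the RL objective to penalize responses whose later reasoning steps stop attending to the image.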

Top-level tags: multi-modal model training agents
Detailed tags: visual reasoning vision-language models reinforcement learning visual reflection reasoning benchmarks

📄 Paper Summary

Look Again, Think Slowly: Enhancing Visual Reflection in Vision-Language Models


1️⃣ One-Sentence Summary

This paper proposes a new visual reasoning model named Reflection-V. By constructing vision-centered reasoning data and designing a visual-attention-based reinforcement learning reward, it effectively strengthens the model's ability to continuously attend to and use visual information during reasoning, yielding significant performance gains on multiple visual reasoning tasks.


📄 Open the original PDF