arXiv submission date: 2025-12-30
📄 Abstract - SenseNova-MARS: Empowering Multimodal Agentic Reasoning and Search via Reinforcement Learning

While Vision-Language Models (VLMs) can solve complex tasks through agentic reasoning, their capabilities remain largely constrained to text-oriented chain-of-thought or isolated tool invocation. They fail to exhibit the human-like proficiency required to seamlessly interleave dynamic tool manipulation with continuous reasoning, particularly in knowledge-intensive and visually complex scenarios that demand coordinated external tools such as search and image cropping. In this work, we introduce SenseNova-MARS, a novel Multimodal Agentic Reasoning and Search framework that empowers VLMs with interleaved visual reasoning and tool-use capabilities via reinforcement learning (RL). Specifically, SenseNova-MARS dynamically integrates the image search, text search, and image crop tools to tackle fine-grained and knowledge-intensive visual understanding challenges. In the RL stage, we propose the Batch-Normalized Group Sequence Policy Optimization (BN-GSPO) algorithm to improve the training stability and advance the model's ability to invoke tools and reason effectively. To comprehensively evaluate the agentic VLMs on complex visual tasks, we introduce the HR-MMSearch benchmark, the first search-oriented benchmark composed of high-resolution images with knowledge-intensive and search-driven questions. Experiments demonstrate that SenseNova-MARS achieves state-of-the-art performance on open-source search and fine-grained image understanding benchmarks. Specifically, on search-oriented benchmarks, SenseNova-MARS-8B scores 67.84 on MMSearch and 41.64 on HR-MMSearch, surpassing proprietary models such as Gemini-3-Flash and GPT-5. SenseNova-MARS represents a promising step toward agentic VLMs by providing effective and robust tool-use capabilities. To facilitate further research in this field, we will release all code, models, and datasets.
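The abstract names BN-GSPO only at a high level. As a rough illustration, the sketch below assumes a GSPO-style, length-normalized sequence-level importance ratio combined with batch-wide (rather than per-group) reward standardization; the function names, the clipping form, and the normalization choice are all assumptions, since the paper's exact formulation is not reproduced here.

```python
import numpy as np

def bn_gspo_advantages(rewards, eps=1e-6):
    """Standardize scalar rewards over the whole batch (assumed
    'batch-normalized' variant), not within each sampling group."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

def gspo_sequence_ratio(logp_new, logp_old, seq_lens):
    """Length-normalized sequence-level importance ratio (GSPO-style):
    exp((log pi_new(y|x) - log pi_old(y|x)) / |y|)."""
    return np.exp((np.asarray(logp_new) - np.asarray(logp_old)) / np.asarray(seq_lens))

def bn_gspo_loss(logp_new, logp_old, seq_lens, rewards, clip_eps=0.2):
    """PPO-style clipped surrogate at the sequence level."""
    adv = bn_gspo_advantages(rewards).reshape(-1)
    s = gspo_sequence_ratio(logp_new, logp_old, seq_lens)
    unclipped = s * adv
    clipped = np.clip(s, 1.0 - clip_eps, 1.0 + clip_eps) * adv
    return -np.mean(np.minimum(unclipped, clipped))
```

When the new and old policies coincide (the ratio is 1 for every sequence), the loss reduces to the negative mean of the standardized advantages, which is zero by construction.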

Top-level tags: multi-modal agents, reinforcement learning
Detailed tags: vision-language models, tool usage, agentic reasoning, benchmark, policy optimization

SenseNova-MARS: Empowering Multimodal Agentic Reasoning and Search via Reinforcement Learning


1️⃣ One-Sentence Summary

This paper proposes a new framework, SenseNova-MARS, that uses reinforcement learning to teach vision-language models to dynamically and coherently interleave multiple external tools, such as image search, text search, and image cropping, while solving complex visual problems, thereby surpassing top proprietary models such as GPT-5 on knowledge-intensive tasks.
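The interleaved reason-then-act behavior described above can be pictured as a simple loop: the model alternates between emitting reasoning and requesting a tool, and each tool result is appended to the context before the next step. The tool names (`image_search`, `text_search`, `image_crop`) follow the paper's description, but the dispatch interface below is purely illustrative, not the paper's actual API.

```python
def run_agent(model, question, image, tools, max_turns=8):
    """Minimal sketch of an interleaved reasoning / tool-use loop.

    `model(context)` is assumed to return a dict with either a final
    answer ({"tool": None, "answer": ...}) or a tool request
    ({"tool": name, "args": {...}}); `tools` maps names to callables.
    """
    context = [{"role": "user", "question": question, "image": image}]
    for _ in range(max_turns):
        step = model(context)
        if step.get("tool") is None:      # no tool requested -> final answer
            return step["answer"]
        result = tools[step["tool"]](**step["args"])  # e.g. image_crop(box=...)
        context.append({"role": "tool", "name": step["tool"], "result": result})
    return None  # turn budget exhausted without a final answer
```

The `max_turns` cap bounds the episode length, which also makes the loop convenient to roll out as a trajectory during RL training.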

Source: arXiv:2512.24330