arXiv submission date: 2026-03-17
📄 Abstract - HopChain: Multi-Hop Data Synthesis for Generalizable Vision-Language Reasoning

Vision-language models (VLMs) show strong multimodal capabilities but still struggle with fine-grained vision-language reasoning. We find that long chain-of-thought (CoT) reasoning exposes diverse failure modes, including perception, reasoning, knowledge, and hallucination errors, which can compound across intermediate steps. However, most existing vision-language data used for reinforcement learning with verifiable rewards (RLVR) does not involve complex reasoning chains that rely on visual evidence throughout, leaving these weaknesses largely unexposed. We therefore propose HopChain, a scalable framework for synthesizing multi-hop vision-language reasoning data for RLVR training of VLMs. Each synthesized multi-hop query forms a logically dependent chain of instance-grounded hops, where earlier hops establish the instances, sets, or conditions needed for later hops, while the final answer remains a specific, unambiguous number suitable for verifiable rewards. We train Qwen3.5-35B-A3B and Qwen3.5-397B-A17B under two RLVR settings: the original data alone, and the original data plus HopChain's multi-hop data, and compare them across 24 benchmarks spanning STEM and Puzzle, General VQA, Text Recognition and Document Understanding, and Video Understanding. Although this multi-hop data is not synthesized for any specific benchmark, it improves 20 of 24 benchmarks on both models, indicating broad and generalizable gains. Consistently, replacing full chained queries with half-multi-hop or single-hop variants reduces the average score across five representative benchmarks from 70.4 to 66.7 and 64.3, respectively. Notably, multi-hop gains peak in long-CoT vision-language reasoning, exceeding 50 points in the ultra-long-CoT regime. These experiments establish HopChain as an effective, scalable framework for synthesizing multi-hop data that improves generalizable vision-language reasoning.
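The paper does not include code, but the abstract's core idea — a logically dependent chain of hops whose final answer is a single unambiguous number checkable by an exact-match reward — can be sketched minimally. The `Hop`/`MultiHopQuery` structures and the binary reward function below are illustrative assumptions, not HopChain's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Hop:
    question: str  # sub-question posed at this step
    grounds: str   # the instance/set/condition this hop establishes for later hops

@dataclass
class MultiHopQuery:
    hops: list[Hop]  # ordered chain; each hop depends on the ones before it
    answer: float    # final answer: a specific number, suitable for verifiable rewards

def verifiable_reward(prediction: str, query: MultiHopQuery) -> float:
    """Binary RLVR-style reward: 1.0 iff the predicted number matches exactly."""
    try:
        return 1.0 if float(prediction.strip()) == query.answer else 0.0
    except ValueError:
        return 0.0  # non-numeric output earns no reward

# A hypothetical three-hop chain: earlier hops fix the instances that later hops use.
q = MultiHopQuery(
    hops=[
        Hop("Which shelf holds the red boxes?", "identifies the target shelf"),
        Hop("Of those boxes, which are larger than the blue cup?", "filters the set"),
        Hop("How many such boxes remain?", "produces the final count"),
    ],
    answer=3.0,
)
print(verifiable_reward("3", q))     # 1.0
print(verifiable_reward("four", q))  # 0.0
```

The point of chaining is that a model cannot answer the final count without resolving every earlier hop against the image, which is what exposes the perception, reasoning, and hallucination errors the abstract describes.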

Top-level tags: multi-modal model training, vision-language models
Detailed tags: chain-of-thought, data synthesis, reasoning, reinforcement learning, benchmark

HopChain: Multi-Hop Data Synthesis for Generalizable Vision-Language Reasoning


1️⃣ One-Sentence Summary

This paper proposes a framework called HopChain that trains models on automatically synthesized vision-language data requiring multi-step logical reasoning, effectively improving the generalization of vision-language models on complex, long-chain reasoning tasks.

Source: arXiv: 2603.17024