STEP3-VL-10B Technical Report
1️⃣ One-Sentence Summary
This paper introduces STEP3-VL-10B, a lightweight open-source multimodal foundation model that, through novel training strategies and inference methods, achieves multimodal understanding and complex reasoning capabilities rivaling or even surpassing some ultra-large or top-tier commercial models, despite having only 10 billion parameters.
We present STEP3-VL-10B, a lightweight open-source foundation model designed to redefine the trade-off between compact efficiency and frontier-level multimodal intelligence. STEP3-VL-10B is realized through two strategic shifts: first, a unified, fully unfrozen pre-training strategy on 1.2T multimodal tokens that integrates a language-aligned Perception Encoder with a Qwen3-8B decoder to establish intrinsic vision-language synergy; and second, a scaled post-training pipeline featuring over 1k iterations of reinforcement learning. Crucially, we implement Parallel Coordinated Reasoning (PaCoRe) to scale test-time compute, allocating resources to scalable perceptual reasoning that explores and synthesizes diverse visual hypotheses. Consequently, despite its compact 10B footprint, STEP3-VL-10B rivals or surpasses models 10$\times$-20$\times$ larger (e.g., GLM-4.6V-106B, Qwen3-VL-235B) and top-tier proprietary flagships like Gemini 2.5 Pro and Seed-1.5-VL. Delivering best-in-class performance, it records 92.2% on MMBench and 80.11% on MMMU, while excelling in complex reasoning with 94.43% on AIME2025 and 75.95% on MathVision. We release the full model suite to provide the community with a powerful, efficient, and reproducible baseline.
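The abstract describes Parallel Coordinated Reasoning (PaCoRe) as scaling test-time compute by exploring diverse visual hypotheses in parallel and then synthesizing them. The paper does not specify the mechanism here, but the general pattern can be sketched as parallel sampling of independent reasoning traces followed by an aggregation step. The sketch below uses a toy stochastic model and majority voting as a stand-in for a learned synthesizer; all names (`sample_hypotheses`, `synthesize`, `toy_model`) are illustrative assumptions, not the paper's API.

```python
import collections
import random

def sample_hypotheses(prompt, model, n=8, seed=0):
    """Sample n independent reasoning traces (the parallel-exploration step)."""
    rng = random.Random(seed)
    return [model(prompt, rng) for _ in range(n)]

def synthesize(hypotheses):
    """Aggregate candidate answers.

    Majority vote here is only a simple stand-in for the synthesis step;
    the actual PaCoRe synthesizer is not specified in this abstract.
    """
    counts = collections.Counter(h["answer"] for h in hypotheses)
    answer, _ = counts.most_common(1)[0]
    return answer

def toy_model(prompt, rng):
    """Toy stochastic 'model': answers correctly ~70% of the time."""
    answer = "42" if rng.random() < 0.7 else str(rng.randint(0, 9))
    return {"trace": f"reasoning about {prompt!r}", "answer": answer}

if __name__ == "__main__":
    hyps = sample_hypotheses("What is 6*7?", toy_model, n=16)
    print(synthesize(hyps))
```

The point of the pattern is that extra inference-time compute (more parallel branches) raises the chance that the aggregated answer is correct, which is the trade-off the abstract attributes to scaling test-time compute.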
Source: arXiv:2601.09668