📄 Abstract - OneThinker: All-in-one Reasoning Model for Image and Video

Reinforcement learning (RL) has recently achieved remarkable success in eliciting visual reasoning within Multimodal Large Language Models (MLLMs). However, existing approaches typically train separate models for different tasks and treat image and video reasoning as disjoint domains. This results in limited scalability toward a multimodal reasoning generalist, which restricts practical versatility and hinders potential knowledge sharing across tasks and modalities. To this end, we propose OneThinker, an all-in-one reasoning model that unifies image and video understanding across diverse fundamental visual tasks, including question answering, captioning, spatial and temporal grounding, tracking, and segmentation. To achieve this, we construct the OneThinker-600k training corpus covering all these tasks and employ commercial models for CoT annotation, resulting in OneThinker-SFT-340k for SFT cold start. Furthermore, we propose EMA-GRPO to handle reward heterogeneity in multi-task RL by tracking task-wise moving averages of reward standard deviations for balanced optimization. Extensive experiments show that OneThinker delivers strong performance on 31 benchmarks across 10 fundamental visual understanding tasks. Moreover, it exhibits effective knowledge transfer between certain tasks and preliminary zero-shot generalization ability, marking a step toward a unified multimodal reasoning generalist. All code, models, and data are released.
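Below is a minimal sketch of the EMA-GRPO idea as the abstract describes it: keep a task-wise exponential moving average of reward standard deviations and use it, rather than the per-group std, to scale group-relative advantages so that tasks with noisier rewards do not dominate. The class name, decay value, and exact normalization rule are illustrative assumptions, not the paper's released implementation.

```python
# Minimal sketch (assumed names/hyperparameters) of task-wise EMA reward-std
# normalization for multi-task GRPO, as described at a high level in the abstract.
import numpy as np

class EMARewardStdTracker:
    """Keeps an exponential moving average of the reward standard deviation per task."""

    def __init__(self, decay: float = 0.99):
        self.decay = decay
        self.ema_std = {}  # task name -> EMA of group-reward std

    def update(self, task: str, rewards: np.ndarray) -> float:
        """Fold the std of one group's rewards into the task's running average."""
        batch_std = float(rewards.std())
        if task not in self.ema_std:
            self.ema_std[task] = batch_std
        else:
            self.ema_std[task] = self.decay * self.ema_std[task] + (1.0 - self.decay) * batch_std
        return self.ema_std[task]

    def normalize(self, task: str, rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
        """Center rewards within the group, then scale by the task-wise EMA std
        (instead of the per-group std), so reward scales stay comparable across tasks."""
        std = self.update(task, rewards)
        return (rewards - rewards.mean()) / (std + eps)

# Example: one group of 4 rollouts sampled for a temporal-grounding prompt.
tracker = EMARewardStdTracker(decay=0.99)
group_rewards = np.array([0.0, 1.0, 0.5, 1.0])
advantages = tracker.normalize("temporal_grounding", group_rewards)
print(advantages)
```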

Top tags: multi-modal, model training, agents
Detailed tags: visual reasoning, multimodal LLM, reinforcement learning, unified model, video understanding

OneThinker: All-in-one Reasoning Model for Image and Video


1️⃣ One-sentence Summary

This paper proposes OneThinker, a unified model that handles multiple core visual understanding tasks for both images and videos (such as question answering, captioning, grounding, and segmentation). Through a novel training method it addresses the reward-imbalance problem in multi-task learning, performs strongly across many benchmarks, and marks a step toward a general multimodal reasoning generalist.


📄 Open Original PDF