📄 Abstract - AdaptVision: Efficient Vision-Language Models via Adaptive Visual Acquisition

Vision-Language Models (VLMs) have achieved remarkable success in visual question answering tasks, but their reliance on large numbers of visual tokens introduces significant computational overhead. While existing efficient VLM approaches reduce visual tokens through fixed-ratio compression, they operate passively and lack the ability to adapt to varying task requirements. This motivates a fundamental question: Can VLMs autonomously determine the minimum number of visual tokens required for each sample? Inspired by human active vision mechanisms, we introduce AdaptVision, an efficient VLM paradigm that enables adaptive visual token acquisition through a coarse-to-fine approach. Our model initially processes compressed visual tokens from low-resolution images and selectively acquires additional visual information by invoking a bounding box tool to crop key regions when necessary. We train AdaptVision using a reinforcement learning framework that carefully balances accuracy and efficiency. Central to our approach is Decoupled Turn Policy Optimization (DTPO), which decouples the learning objective into two components: (1) tool learning, which optimizes correct tool utilization, and (2) accuracy improvement, which refines the generated responses to improve answer correctness. Based on this formulation, we further decouple advantage estimation by computing separate advantages for tokens associated with each objective. This formulation enables more effective optimization for AdaptVision compared to vanilla GRPO. Comprehensive experiments across multiple VQA benchmarks demonstrate that AdaptVision achieves superior performance while consuming substantially fewer visual tokens than state-of-the-art efficient VLM methods.
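The abstract's key mechanism is decoupled advantage estimation: tokens tied to tool invocation receive an advantage derived from a tool-use objective, while answer tokens receive one derived from answer correctness. The paper's exact DTPO formulation is not given here, so the following is only a minimal sketch of that idea, assuming GRPO-style group-normalized rewards and a hypothetical boolean mask marking tool-call tokens:

```python
import numpy as np

def group_advantages(rewards):
    """GRPO-style advantage: normalize each rollout's reward against its group."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def decoupled_advantages(tool_rewards, answer_rewards, tool_masks):
    """Sketch of DTPO-style decoupling (hypothetical interface):
    each rollout gets two scalar advantages, one per objective, and every
    token is assigned the advantage of the objective it serves.
    tool_masks: per-rollout boolean arrays, True where a token belongs
    to a tool call (e.g. the bounding-box crop request)."""
    adv_tool = group_advantages(tool_rewards)      # tool-learning objective
    adv_answer = group_advantages(answer_rewards)  # accuracy objective
    per_token = []
    for a_t, a_a, mask in zip(adv_tool, adv_answer, tool_masks):
        m = np.asarray(mask, dtype=bool)
        # Tool-call tokens take the tool advantage; all others the answer advantage.
        per_token.append(np.where(m, a_t, a_a))
    return per_token
```

In vanilla GRPO every token in a rollout would share one scalar advantage; the split above lets a rollout that used the tool correctly but answered wrong still reinforce its tool-call tokens, which is the optimization benefit the abstract claims.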

Top-level tags: multi-modal model training agents
Detailed tags: vision-language models efficient inference reinforcement learning adaptive vision visual token reduction

AdaptVision: Efficient Vision-Language Models via Adaptive Visual Acquisition


1️⃣ One-sentence summary

This paper proposes AdaptVision, a new approach to efficient vision-language models that mimics the human active vision mechanism: it adaptively decides how much image information to process based on the task's needs, substantially reducing computational overhead while preserving answer accuracy.


📄 Open the original PDF