Look Before Acting: Enhancing Vision Foundation Representations for Vision-Language-Action Models
1️⃣ One-Sentence Summary
This paper proposes a new method called DeepVision-VLA. By letting a vision expert model share visual information with the action-generation backbone earlier and at deeper layers, and by intelligently filtering out irrelevant visual detail, it significantly improves the accuracy and efficiency with which robots carry out complex manipulation tasks from language instructions.
Vision-Language-Action (VLA) models have recently emerged as a promising paradigm for robotic manipulation, in which reliable action prediction critically depends on accurately interpreting and integrating visual observations conditioned on language instructions. Although recent works have sought to enhance the visual capabilities of VLA models, most approaches treat the LLM backbone as a black box, providing limited insight into how visual information is grounded into action generation. To address this, we perform a systematic analysis of multiple VLA models across different action-generation paradigms and observe that sensitivity to visual tokens progressively decreases in deeper layers during action generation. Motivated by this observation, we propose **DeepVision-VLA**, built on a **Vision-Language Mixture-of-Transformers (VL-MoT)** framework. This framework enables shared attention between the vision foundation model and the VLA backbone, injecting multi-level visual features from the vision expert into deeper layers of the VLA backbone to enhance visual representations for precise and complex manipulation. In addition, we introduce **Action-Guided Visual Pruning (AGVP)**, which leverages shallow-layer attention to prune irrelevant visual tokens while preserving task-relevant ones, reinforcing critical visual cues for manipulation with minimal computational overhead. DeepVision-VLA outperforms prior state-of-the-art methods by 9.0% and 7.5% on simulated and real-world tasks, respectively, providing new insights for the design of visually enhanced VLA models.
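The abstract does not give AGVP's exact scoring rule, but the general idea of attention-based token pruning can be sketched as follows: rank each visual token by the attention mass it receives in a shallow layer, keep the top fraction, and drop the rest. The function name, `keep_ratio` parameter, and toy scores below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def attention_guided_pruning(visual_tokens, attn_scores, keep_ratio=0.5):
    """Keep the visual tokens receiving the highest shallow-layer attention.

    visual_tokens: (N, D) array of visual token embeddings.
    attn_scores:   (N,) attention mass each visual token receives
                   (e.g., summed over shallow-layer query heads).
    keep_ratio:    fraction of tokens to retain (hypothetical knob).
    """
    n_keep = max(1, int(len(visual_tokens) * keep_ratio))
    # Top-k indices by attention score, then restore spatial order.
    keep_idx = np.sort(np.argsort(attn_scores)[::-1][:n_keep])
    return visual_tokens[keep_idx], keep_idx

# Toy example: 8 visual tokens of dim 4, keep the most-attended half.
tokens = np.arange(8 * 4, dtype=np.float32).reshape(8, 4)
scores = np.array([0.01, 0.30, 0.05, 0.20, 0.02, 0.25, 0.10, 0.07])
pruned, idx = attention_guided_pruning(tokens, scores, keep_ratio=0.5)
print(idx)  # indices of the retained tokens
```

Pruning by shallow-layer attention is cheap because the scores are a byproduct of the forward pass already being computed; the reported gain is that only task-relevant tokens propagate to deeper, more expensive layers.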
Source: arXiv: 2603.15618