📄 Abstract - Don't Blind Your VLA: Aligning Visual Representations for OOD Generalization

The growing success of Vision-Language-Action (VLA) models stems from the promise that pretrained Vision-Language Models (VLMs) can endow agents with transferable world knowledge and vision-language (VL) grounding, laying a foundation for action models with broader generalization. Yet when these VLMs are adapted to the action modality, it remains unclear to what extent their original VL representations and knowledge are preserved. In this work, we conduct a systematic study of representation retention during VLA fine-tuning, showing that naive action fine-tuning leads to degradation of visual representations. To characterize and measure these effects, we probe the VLA's hidden representations and analyze attention maps; further, we design a set of targeted tasks and methods that contrast VLA models with their counterpart VLMs, isolating changes in VL capabilities induced by action fine-tuning. We further evaluate a range of strategies for aligning visual representations and introduce a simple yet effective method that mitigates degradation and yields improved generalization to out-of-distribution (OOD) scenarios. Taken together, our analysis clarifies the trade-off between action fine-tuning and the degradation of VL representations and highlights practical approaches to recover inherited VL capabilities. Code is publicly available: this https URL
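
To make the "aligning visual representations" idea concrete, below is a minimal sketch of one plausible alignment strategy: regularizing the VLA's visual tokens toward those of the frozen pretrained VLM during action fine-tuning. This is an illustration under stated assumptions, not the paper's exact recipe; `vla_model`, `frozen_vlm`, their return values, and the cosine-distance choice are all hypothetical.

```python
import torch
import torch.nn.functional as F

def alignment_loss(vla_visual_feats: torch.Tensor,
                   vlm_visual_feats: torch.Tensor) -> torch.Tensor:
    """Penalize drift of the VLA's visual tokens away from the frozen VLM's.

    Both tensors are assumed to be (batch, num_tokens, dim) visual-token
    features for the same images. Cosine distance is one common choice;
    the paper may use a different distance or layer selection.
    """
    vla = F.normalize(vla_visual_feats, dim=-1)
    vlm = F.normalize(vlm_visual_feats.detach(), dim=-1)  # frozen teacher
    return (1.0 - (vla * vlm).sum(dim=-1)).mean()

def training_step(batch, vla_model, frozen_vlm, lam=0.1):
    """Hypothetical training step: the total loss mixes the action objective
    with the alignment regularizer, weighted by coefficient `lam`."""
    action_loss, vla_feats = vla_model(batch)                 # assumed API
    with torch.no_grad():
        vlm_feats = frozen_vlm.encode_image(batch["images"])  # assumed API
    return action_loss + lam * alignment_loss(vla_feats, vlm_feats)
```

The design intuition is that the action head can adapt freely while the visual pathway is anchored to the representations the VLM learned at scale, which is what preserves the inherited VL grounding.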

Top-level tags: multi-modal, model training, model evaluation
Detailed tags: vision-language-action, representation alignment, fine-tuning degradation, out-of-distribution generalization, visual representation preservation

📄 Paper Summary

Don't Blind Your VLA: Aligning Visual Representations for OOD Generalization


1️⃣ One-Sentence Summary

This study finds that when a vision-language model is fine-tuned into a vision-language-action model, naive action fine-tuning damages the model's original visual understanding; it proposes a simple yet effective method for preserving visual representation quality, thereby improving generalization to unseen (out-of-distribution) scenarios.
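
The abstract mentions probing the VLA's hidden representations to measure this degradation. A minimal sketch of such a probe is shown below: fit a linear classifier on frozen hidden features and compare accuracy between the original VLM and the fine-tuned VLA. The feature-extraction layer, task labels, and train/test split are assumptions for illustration.

```python
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def probe_accuracy(features: torch.Tensor, labels: torch.Tensor,
                   train_frac: float = 0.8) -> float:
    """Fit a linear probe on frozen hidden features and report test accuracy.

    `features` is (num_samples, dim) hidden states extracted from a fixed
    layer; `labels` holds a visual task's class labels (e.g., object
    category). Assumes the samples are pre-shuffled. A lower score for the
    fine-tuned VLA than for its source VLM indicates that action
    fine-tuning erased visual information.
    """
    n = int(train_frac * len(labels))
    X, y = features.cpu().numpy(), labels.cpu().numpy()
    clf = LogisticRegression(max_iter=1000).fit(X[:n], y[:n])
    return accuracy_score(y[n:], clf.predict(X[n:]))
```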

