JEPA-VLA: Video Predictive Embedding is Needed for VLA Models
1️⃣ One-Sentence Summary
This paper proposes that injecting a visual representation pretrained on video, one that predicts how the environment evolves, into existing vision-language-action models significantly improves sample efficiency and generalization on robotic manipulation tasks.
Recent vision-language-action (VLA) models built upon pretrained vision-language models (VLMs) have achieved significant improvements in robotic manipulation. However, current VLAs still suffer from low sample efficiency and limited generalization. This paper argues that these limitations are closely tied to an overlooked component, the pretrained visual representation, which provides insufficient knowledge for both environment understanding and the policy prior. Through an in-depth analysis, we find that commonly used visual representations in VLAs, whether pretrained via language-image contrastive learning or image-based self-supervised learning, remain inadequate at capturing crucial, task-relevant environment information and at inducing effective policy priors, i.e., anticipatory knowledge of how the environment evolves under successful task execution. In contrast, we discover that predictive embeddings pretrained on videos, in particular V-JEPA 2, are adept at flexibly discarding unpredictable environment factors and encoding task-relevant temporal dynamics, thereby effectively compensating for key shortcomings of existing visual representations in VLAs. Building on these observations, we introduce JEPA-VLA, a simple yet effective approach that adaptively integrates predictive embeddings into existing VLAs. Our experiments demonstrate that JEPA-VLA yields substantial performance gains across a range of benchmarks, including LIBERO, LIBERO-plus, RoboTwin2.0, and real-robot tasks.
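To make the "adaptively integrates predictive embeddings" idea concrete, the sketch below shows one plausible way to fuse frozen video-predictive embeddings (e.g., from a V-JEPA 2-style encoder) with a VLA's existing visual tokens via a learned per-token gate. This is an illustrative assumption under stated simplifications, not the paper's implementation: the module name `AdaptiveFusion`, the gating design, the matching token counts, and the dimensions are all hypothetical.

```python
# Hypothetical sketch: gated fusion of frozen video-predictive embeddings
# with a VLA's visual tokens. Names, shapes, and the gating design are
# assumptions for illustration, not the JEPA-VLA implementation.
import torch
import torch.nn as nn


class AdaptiveFusion(nn.Module):
    def __init__(self, vla_dim: int, jepa_dim: int):
        super().__init__()
        # Project predictive embeddings to the VLA token width.
        self.proj = nn.Linear(jepa_dim, vla_dim)
        # Per-token gate deciding how much predictive signal to inject.
        self.gate = nn.Sequential(
            nn.Linear(2 * vla_dim, vla_dim),
            nn.Sigmoid(),
        )

    def forward(self, vla_tokens: torch.Tensor, jepa_tokens: torch.Tensor) -> torch.Tensor:
        # vla_tokens:  (B, N, vla_dim)  tokens from the VLA's own vision encoder
        # jepa_tokens: (B, N, jepa_dim) embeddings from the frozen video encoder
        pred = self.proj(jepa_tokens)
        g = self.gate(torch.cat([vla_tokens, pred], dim=-1))
        # Gated residual injection keeps the original tokens intact when g -> 0.
        return vla_tokens + g * pred


if __name__ == "__main__":
    fuse = AdaptiveFusion(vla_dim=1024, jepa_dim=1280)
    vla = torch.randn(2, 256, 1024)
    jepa = torch.randn(2, 256, 1280)
    print(fuse(vla, jepa).shape)  # torch.Size([2, 256, 1024])
```

A gated residual is just one option; cross-attention from VLA tokens to the predictive embeddings would serve the same purpose of letting the policy backbone draw on environment-dynamics features only where they help.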
Source: arXiv: 2602.11832