📄 Abstract - Mantis: A Versatile Vision-Language-Action Model with Disentangled Visual Foresight

Recent advances in Vision-Language-Action (VLA) models demonstrate that visual signals can effectively complement sparse action supervision. However, letting a VLA directly predict high-dimensional visual states can dilute model capacity and incur prohibitive training cost, while compressing visual states into more compact supervisory signals inevitably creates information bottlenecks. Moreover, existing methods often suffer from poor comprehension and reasoning capabilities because they neglect language supervision. This paper introduces Mantis, a novel framework featuring Disentangled Visual Foresight (DVF) to tackle these issues. Specifically, Mantis decouples visual foresight prediction from the backbone by combining meta queries with a diffusion Transformer (DiT) head. With the current visual state provided to the DiT via a residual connection, a simple next-state prediction objective enables the meta queries to automatically capture the latent actions that delineate the visual trajectory, and hence boost the learning of explicit actions. This disentanglement reduces the burden on the VLA backbone, allowing it to maintain comprehension and reasoning capabilities through language supervision. Empirically, pretrained on human manipulation videos, robot demonstrations, and image-text pairs, Mantis achieves a 96.7% success rate on the LIBERO benchmark after fine-tuning, surpassing powerful baselines while converging quickly. Real-world evaluations show that Mantis outperforms $\pi_{0.5}$, a leading open-source VLA model, particularly in instruction following, generalization to unseen instructions, and reasoning ability. Code and weights are released to support the open-source community.
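To make the decoupling concrete, below is a minimal PyTorch sketch of the idea as the abstract describes it: learnable meta queries read a compact latent out of the backbone, a separate head predicts the next visual state from that latent, and a residual connection with the current visual state means the queries only need to encode the change (the latent action). All module names, sizes, and the plain transformer stand-ins for the backbone and DiT head are illustrative assumptions, not the released implementation.

```python
# Hypothetical sketch of Disentangled Visual Foresight (DVF); not the authors' code.
import torch
import torch.nn as nn


class DisentangledVisualForesight(nn.Module):
    def __init__(self, dim=512, num_meta_queries=16, num_patches=196):
        super().__init__()
        # Learnable meta queries appended to the backbone's token sequence.
        self.meta_queries = nn.Parameter(torch.randn(num_meta_queries, dim) * 0.02)
        # Stand-in for the VLA backbone (vision-language transformer).
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=2
        )
        # Stand-in for the DiT head that predicts next-state visual tokens.
        self.foresight_head = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=2
        )
        self.num_patches = num_patches

    def forward(self, vl_tokens, current_visual_state):
        # vl_tokens: (B, L, dim) vision-language tokens
        # current_visual_state: (B, P, dim) tokens of the current observation
        B = vl_tokens.size(0)
        queries = self.meta_queries.unsqueeze(0).expand(B, -1, -1)
        # Backbone processes vision-language tokens together with the meta queries.
        hidden = self.backbone(torch.cat([vl_tokens, queries], dim=1))
        latent_action = hidden[:, -queries.size(1):]  # (B, Q, dim)
        # The head is conditioned on the latent action and the current state...
        head_in = torch.cat([latent_action, current_visual_state], dim=1)
        delta = self.foresight_head(head_in)[:, -self.num_patches:]
        # ...and the residual connection means it only models the change.
        next_state_pred = current_visual_state + delta
        return next_state_pred, latent_action


# Usage: train with a next-state prediction loss (MSE here as a stand-in for the
# diffusion objective); explicit actions would be decoded from latent_action.
model = DisentangledVisualForesight()
vl_tokens = torch.randn(2, 64, 512)
current_state = torch.randn(2, 196, 512)
next_state = torch.randn(2, 196, 512)
pred, latent_action = model(vl_tokens, current_state)
loss = nn.functional.mse_loss(pred, next_state)
```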

Top-level tags: multi-modal, robotics, model training
Detailed tags: vision-language-action, visual foresight, diffusion transformer, robot manipulation, instruction following

📄 Paper Summary

Mantis: A Versatile Vision-Language-Action Model with Disentangled Visual Foresight


1️⃣ One-Sentence Summary

This paper proposes Mantis, a novel Vision-Language-Action model that offloads visual foresight to a decoupled module, reducing the burden on the backbone network; this lets it retain strong language comprehension and reasoning while markedly improving the accuracy and generalization of robot task execution.

