📄 Abstract - VLA-4D: Embedding 4D Awareness into Vision-Language-Action Models for SpatioTemporally Coherent Robotic Manipulation

Vision-language-action (VLA) models show potential for general robotic tasks, but still struggle with spatiotemporally coherent manipulation, which requires fine-grained representations. Typically, existing methods embed 3D positions into visual representations to improve the spatial precision of actions, yet they fail to achieve temporally coherent control over action execution. In this work, we propose VLA-4D, a general VLA model with 4D awareness for spatiotemporally coherent robotic manipulation. Our model is guided by two key designs: 1) 4D-aware visual representation. We extract visual features, embed 1D time into 3D positions to form 4D embeddings, and fuse them into a unified visual representation via a cross-attention mechanism. 2) Spatiotemporal action representation. We extend conventional spatial action representations with temporal information to enable spatiotemporal planning, and align the multimodal representations within the LLM for spatiotemporal action prediction. Within this unified framework, the designed visual and action representations jointly make robotic manipulation spatially smooth and temporally coherent. In addition, we extend the VLA dataset with temporal action annotations for fine-tuning our model. Extensive experiments verify the superiority of our method across different robotic manipulation tasks.
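
Design (1) can be pictured as a small fusion module. The PyTorch sketch below is illustrative only and based solely on the abstract (the module name, dimensions, and arguments are assumptions, not the paper's implementation): each 3D position and its 1D timestamp are lifted into a 4D embedding, and the visual tokens attend to those embeddings via cross-attention to produce a unified 4D-aware representation.

```python
import torch
import torch.nn as nn

class FourDAwareFusion(nn.Module):
    """Hypothetical sketch of a 4D-aware visual representation:
    embed (x, y, z, t) coordinates and fuse them into visual features
    with cross-attention, as described in the abstract."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Map 4D coordinates (x, y, z, t) to the visual feature dimension.
        self.coord_embed = nn.Sequential(
            nn.Linear(4, dim), nn.GELU(), nn.Linear(dim, dim)
        )
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual_feats, positions_3d, timestamps):
        # visual_feats: (B, N, dim) visual tokens
        # positions_3d: (B, N, 3) 3D positions; timestamps: (B, N, 1) 1D time
        coords_4d = torch.cat([positions_3d, timestamps], dim=-1)   # (B, N, 4)
        pe_4d = self.coord_embed(coords_4d)                         # 4D embeddings
        # Visual tokens (queries) attend to the 4D embeddings (keys/values).
        fused, _ = self.cross_attn(query=visual_feats, key=pe_4d, value=pe_4d)
        return self.norm(visual_feats + fused)                      # unified representation
```
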

Top-level tags: robotics, multi-modal, model training
Detailed tags: vision-language-action, 4D representation, spatiotemporal coherence, robotic manipulation, multimodal fusion

📄 Paper Summary

VLA-4D: Embedding 4D Awareness into Vision-Language-Action Models for SpatioTemporally Coherent Robotic Manipulation


1️⃣ One-Sentence Summary

This work proposes VLA-4D, a new vision-language-action model that fuses the time dimension with 3D spatial positions into a 4D-aware representation, enabling robots to perform smoother, more coherent spatiotemporal action planning and manipulation.
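
As a rough illustration of design (2), a conventional spatial action (e.g., end-effector translation, rotation, and gripper state) would be extended with a temporal term such as an execution duration. The sketch below is a hypothetical action format, not the paper's actual parameterization:

```python
from dataclasses import dataclass
import torch

@dataclass
class SpatiotemporalAction:
    """Hypothetical action format: a spatial action extended with time."""
    translation: torch.Tensor  # (3,) end-effector displacement
    rotation: torch.Tensor     # (3,) orientation change (e.g., Euler angles)
    gripper: float             # gripper open/close command
    duration: float            # temporal term: how long the step should take

    def to_vector(self) -> torch.Tensor:
        # Flatten into a single target vector an LLM action head could predict.
        return torch.cat([
            self.translation,
            self.rotation,
            torch.tensor([self.gripper, self.duration]),
        ])
```
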
