📄 Abstract - DiG-Flow: Discrepancy-Guided Flow Matching for Robust VLA Models

Vision-Language-Action (VLA) models trained with flow matching have demonstrated impressive capabilities on robotic manipulation tasks. However, their performance often degrades under distribution shift and on complex multi-step tasks, suggesting that the learned representations may not robustly capture task-relevant semantics. We introduce DiG-Flow, a principled framework that enhances VLA robustness through geometric regularization. Our key insight is that the distributional discrepancy between observation and action embeddings provides a meaningful geometric signal: lower transport cost indicates compatible representations, while higher cost suggests potential misalignment. DiG-Flow computes a discrepancy measure between empirical distributions of observation and action embeddings, maps it to a modulation weight via a monotone function, and applies residual updates to the observation embeddings before flow matching. Crucially, this intervention operates at the representation level without modifying the flow matching path or target vector field. We provide theoretical guarantees showing that discrepancy-guided training provably decreases the training objective, and that guided inference refinement converges with contraction. Empirically, DiG-Flow integrates into existing VLA architectures with negligible overhead and consistently improves performance, with particularly pronounced gains on complex multi-step tasks and under limited training data.
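To make the mechanism described above concrete, here is a minimal sketch of discrepancy-guided residual modulation, assuming a PyTorch-style implementation. The transport-cost proxy (mean pairwise squared distance between embedding sets), the monotone map (a scaled sigmoid with slope `alpha`), and the `Residual` network are all illustrative assumptions, not the authors' actual design, which the abstract does not specify.

```python
# Minimal sketch (not the authors' code) of discrepancy-guided residual
# modulation. The discrepancy below is a crude stand-in for a transport
# cost; `alpha` and the residual network are hypothetical placeholders.
import torch
import torch.nn as nn

class DiscrepancyGuidedResidual(nn.Module):
    def __init__(self, dim: int, alpha: float = 1.0):
        super().__init__()
        self.alpha = alpha                      # slope of the monotone map (assumed)
        self.residual = nn.Sequential(          # hypothetical residual update network
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )

    def discrepancy(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        # Proxy for the transport cost between the empirical distributions
        # of observation and action embeddings: mean pairwise squared
        # Euclidean distance across the two sets.
        return torch.cdist(obs, act).pow(2).mean()

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        d = self.discrepancy(obs, act)
        w = torch.sigmoid(self.alpha * d)       # monotone map: higher cost -> larger weight
        # Residual update applied to observation embeddings only; the
        # flow-matching path and target vector field are left untouched.
        return obs + w * self.residual(obs)

# Usage: refine observation embeddings before they condition flow matching.
obs_emb = torch.randn(32, 256)   # batch of observation embeddings
act_emb = torch.randn(32, 256)   # batch of action embeddings
refined = DiscrepancyGuidedResidual(dim=256)(obs_emb, act_emb)
```

Note that the intervention touches only the observation embeddings, mirroring the abstract's claim that DiG-Flow operates at the representation level without modifying the flow matching path or target vector field.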

Top-level tags: robotics, model training, multi-modal
Detailed tags: vision-language-action, flow matching, geometric regularization, distribution shift, robust representation

DiG-Flow: Discrepancy-Guided Flow Matching for Robust VLA Models


1️⃣ One-Sentence Summary

This paper proposes a new method called DiG-Flow, which computes the distributional discrepancy between observation and action embeddings and uses it to guide model training, significantly improving the robustness and performance of vision-language-action models on complex tasks and under changing scenes.

