TAG: Target-Agnostic Guidance for Stable Object-Centric Inference in Vision-Language-Action Models
1️⃣ One-sentence summary
This paper proposes TAG, a simple inference-time guidance method that contrasts the policy's predictions under the original observation and an observation with the target object erased. This strengthens a robotic vision-language-action model's ability to accurately identify and manipulate the target object in complex cluttered scenes, without modifying the model architecture.
Vision-Language-Action (VLA) policies have shown strong progress in mapping language instructions and visual observations to robotic actions, yet their reliability degrades in cluttered scenes with distractors. By analyzing failure cases, we find that many errors do not arise from infeasible motions, but from instance-level grounding failures: the policy often produces a plausible grasp trajectory that lands slightly off-target or even on the wrong object instance. To address this issue, we propose TAG (Target-Agnostic Guidance), a simple inference-time guidance mechanism that explicitly reduces distractor- and appearance-induced bias in VLA policies. Inspired by classifier-free guidance (CFG), TAG contrasts policy predictions under the original observation and an object-erased observation, and uses their difference as a residual steering signal that strengthens the influence of object evidence in the decision process. TAG does not require modifying the policy architecture and can be integrated with existing VLA policies with minimal training and inference changes. We evaluate TAG on standard manipulation benchmarks, including LIBERO, LIBERO-Plus, and VLABench, where it consistently improves robustness under clutter and reduces near-miss and wrong-object executions.
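The CFG-style contrast described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the combination rule (a residual in action space with a scalar guidance weight) and all names (`tag_guided_action`, `guidance_scale`, the stand-in `policy`) are assumptions for demonstration purposes; the paper may apply the contrast at a different point in the policy (e.g. on denoising outputs or logits) and with a different weighting scheme.

```python
import numpy as np

def tag_guided_action(policy, obs, obs_erased, guidance_scale=1.5):
    """CFG-style residual guidance (illustrative sketch, not the paper's exact rule).

    Contrasts the policy's prediction on the original observation with its
    prediction on an observation where the target object has been erased,
    and amplifies the difference attributable to the target object:
        a_guided = a_erased + s * (a_orig - a_erased)
    With s = 1 this recovers the original prediction; s > 1 strengthens
    the influence of target-object evidence.
    """
    a_orig = policy(obs)          # prediction with the target object visible
    a_erased = policy(obs_erased) # prediction with the target object removed
    return a_erased + guidance_scale * (a_orig - a_erased)

# Toy demonstration with a stand-in "policy" (mean over observation rows).
policy = lambda o: o.mean(axis=0)
obs = np.array([[1.0, 2.0], [3.0, 4.0]])
obs_erased = np.array([[1.0, 2.0], [1.0, 2.0]])
print(tag_guided_action(policy, obs, obs_erased))  # -> [2.5 3.5]
```

Note that `obs_erased` must come from an object-erasing step (e.g. masking or inpainting the target region), which this sketch takes as given.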
Source: arXiv: 2603.24584