Beyond Description: Cognitively Benchmarking Fine-Grained Action for Embodied Agents
1️⃣ One-Sentence Summary
This paper introduces CFG-Bench, a new benchmark for evaluating the cognitive ability of multimodal large language models to understand and generate fine-grained physical action instructions. It finds that current mainstream models fall significantly short in this respect, but that fine-tuning on the benchmark's data effectively improves model performance on embodied tasks.
Multimodal Large Language Models (MLLMs) show promising results as decision-making engines for embodied agents operating in complex, physical environments. However, existing benchmarks often prioritize high-level planning or spatial reasoning, leaving the fine-grained action intelligence required for embodied physical interaction underexplored. To address this gap, we introduce CFG-Bench, a new benchmark designed to systematically evaluate this crucial capability. CFG-Bench consists of 1,368 curated videos paired with 19,562 three-modality question-answer pairs targeting four cognitive abilities: 1) Physical Interaction, 2) Temporal-Causal Relation, 3) Intentional Understanding, and 4) Evaluative Judgment. Together, these dimensions provide a systematic framework for assessing a model's ability to translate visual observations into actionable knowledge, moving beyond mere surface-level recognition. Our comprehensive evaluation on CFG-Bench reveals that leading MLLMs struggle to produce detailed instructions for physical interactions and exhibit profound limitations in the higher-order reasoning of intention and evaluation. Moreover, supervised fine-tuning (SFT) on our data demonstrates that teaching an MLLM to articulate fine-grained actions directly translates to significant performance gains on established embodied benchmarks. Our analysis highlights these limitations and offers insights for developing more capable and grounded embodied agents.