📄 Paper Summary
MME-CC: A Challenging Multi-Modal Evaluation Benchmark of Cognitive Capacity
1️⃣ One-Sentence Summary
This paper introduces MME-CC, a new benchmark designed to systematically evaluate multimodal large language models (MLLMs) on vision-centric cognitive abilities such as spatial, geometric, and knowledge-based reasoning. It finds that current models are broadly weak in these areas, identifies their common error patterns, and aims to inform future model design.
2️⃣ Abstract

As reasoning models scale rapidly, the essential role of multimodality in human cognition has come into sharp relief, driving a growing need to probe vision-centric cognitive behaviors. Yet, existing multimodal benchmarks either overemphasize textual reasoning or fall short of systematically capturing vision-centric cognitive behaviors, leaving the cognitive capacity of MLLMs insufficiently assessed. To address this limitation, we introduce MME-CC (Multi-Modal Evaluation benchmark of Cognitive Capacity), a vision-grounded benchmark that organizes 11 representative reasoning tasks into three fundamental categories of visual information: spatial, geometric, and knowledge-based reasoning, and provides fine-grained analyses of MLLMs' cognitive capacity across these dimensions. Based on MME-CC, we conduct extensive experiments over 16 representative MLLMs. Our study reveals that closed-source models currently lead overall (e.g., 42.66 for Gemini-2.5-Pro vs. 30.45 for GLM-4.5V), while spatial and geometric reasoning remain broadly weak (≤ 30%). We further identify common error patterns, including orientation mistakes, fragile cross-view identity persistence, and poor adherence to counterfactual instructions, and observe that Chain-of-Thought typically follows a three-stage process (extract → reason → verify) with heavy reliance on visual extraction. We hope this work catalyzes a shift toward treating the cognitive capacity of MLLMs as central to both evaluation and model design.
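The abstract describes fine-grained, per-category analyses across the three reasoning categories. As an illustration only, the minimal Python sketch below shows how such a per-category accuracy breakdown could be aggregated from judged model outputs; the record schema, task names, and function name are assumptions for the example and are not the authors' released evaluation code.

```python
# Hypothetical sketch: aggregating per-category accuracy for a benchmark
# like MME-CC, assuming each judged record carries a category label
# ("spatial", "geometric", "knowledge"), a task name, and a correctness flag.
from collections import defaultdict


def aggregate_scores(records):
    """records: iterable of dicts like
    {"category": "spatial", "task": "rotation", "correct": True}"""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r["category"]] += 1
        hits[r["category"]] += int(r["correct"])
    # Per-category accuracy (%) plus an unweighted mean over categories,
    # mirroring the kind of fine-grained breakdown described in the abstract.
    per_cat = {c: 100.0 * hits[c] / totals[c] for c in totals}
    overall = sum(per_cat.values()) / len(per_cat)
    return per_cat, overall


if __name__ == "__main__":
    # Illustrative (made-up) records; task names are placeholders.
    demo = [
        {"category": "spatial", "task": "rotation", "correct": False},
        {"category": "geometric", "task": "symmetry", "correct": True},
        {"category": "knowledge", "task": "chart_qa", "correct": True},
    ]
    print(aggregate_scores(demo))
```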