arXiv submission date: 2026-03-03
📄 Abstract - Evaluating Cross-Modal Reasoning Ability and Problem Characteristics with Multimodal Item Response Theory

Multimodal Large Language Models (MLLMs) have recently emerged as general architectures capable of reasoning over diverse modalities. Benchmarks for MLLMs should therefore measure their ability to integrate information across modalities. However, current benchmarks are filled with shortcut questions that can be solved using only a single modality, yielding unreliable rankings. For example, in vision-language settings, the correct answer can often be found from the text alone, without the image (or vice versa). These low-quality questions unnecessarily inflate the size and computational cost of benchmarks. We introduce a multimodal, multidimensional item response theory framework (M3IRT) that extends classical IRT by decomposing both model ability and item difficulty into image-only, text-only, and cross-modal components. M3IRT estimates the cross-modal ability of MLLMs and each question's cross-modal difficulty, enabling compact, high-quality subsets that better reflect multimodal reasoning. Across 24 VLMs on three benchmarks, M3IRT prioritizes genuinely cross-modal questions over shortcuts and preserves ranking fidelity even when 50% of items are artificially generated low-quality questions, thereby reducing evaluation cost while improving reliability. M3IRT thus offers a practical tool for assessing cross-modal reasoning and refining multimodal benchmarks.
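The decomposition the abstract describes can be illustrated with a minimal sketch. The following is an assumption on my part, not the paper's actual parameterization: a multidimensional 2PL-style response model where each item carries a discrimination and difficulty per component (image-only, text-only, cross-modal), and the correctness probability is a logistic function of the component-wise ability-minus-difficulty terms. A "shortcut" item then loads mostly on one single-modality component, while a genuinely cross-modal item loads on the cross component.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def m3irt_prob(theta: dict, b: dict, a: dict) -> float:
    """Probability that a model answers an item correctly.

    theta : model ability per component ('image', 'text', 'cross')
    b     : item difficulty per component
    a     : item discrimination (loading) per component

    Illustrative multidimensional 2PL form; the paper's exact
    parameterization may differ.
    """
    logit = sum(a[k] * (theta[k] - b[k]) for k in ("image", "text", "cross"))
    return sigmoid(logit)

# Hypothetical "shortcut" item: discrimination concentrated on text-only.
shortcut = dict(a=dict(image=0.1, text=1.5, cross=0.05),
                b=dict(image=0.0, text=-0.5, cross=0.0))

# Hypothetical cross-modal item: discrimination concentrated on cross.
crossmodal = dict(a=dict(image=0.3, text=0.3, cross=1.5),
                  b=dict(image=0.0, text=0.0, cross=0.5))

# A model strong at text but with no cross-modal ability aces the
# shortcut item yet struggles on the cross-modal one.
text_only_model = dict(image=0.0, text=2.0, cross=0.0)
print(m3irt_prob(text_only_model, shortcut["b"], shortcut["a"]))
print(m3irt_prob(text_only_model, crossmodal["b"], crossmodal["a"]))
```

Under this toy parameterization, the text-only model's success probability is high on the shortcut item but below chance-adjusted levels on the cross-modal item, which is exactly the gap the framework exploits to flag shortcut questions.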

Top-level tags: multi-modal model, evaluation, benchmark
Detailed tags: item response theory, cross-modal reasoning, evaluation framework, vision-language models, benchmark quality

Evaluating Cross-Modal Reasoning Ability and Problem Characteristics with Multimodal Item Response Theory


1️⃣ One-sentence summary

This paper proposes M3IRT, a multimodal item response theory framework that distinguishes and selects high-quality test questions genuinely requiring cross-modal reasoning, enabling more reliable measurement of multimodal large models' integrated understanding at lower evaluation cost.

Source: arXiv 2603.02663