arXiv submission date: 2026-03-12
📄 Abstract - MM-CondChain: A Programmatically Verified Benchmark for Visually Grounded Deep Compositional Reasoning

Multimodal Large Language Models (MLLMs) are increasingly used to carry out visual workflows such as navigating GUIs, where the next step depends on verified visual compositional conditions (e.g., "if a permission dialog appears and the color of the interface is green, click Allow") and the process may branch or terminate early. Yet this capability remains under-evaluated: existing benchmarks focus on shallow compositions or independent constraints rather than deeply chained compositional conditionals. In this paper, we introduce MM-CondChain, a benchmark for visually grounded deep compositional reasoning. Each benchmark instance is organized as a multi-layer reasoning chain, where every layer contains a non-trivial compositional condition grounded in visual evidence and built from multiple objects, attributes, or relations. To answer correctly, an MLLM must perceive the image in detail, reason over multiple visual elements at each step, and follow the resulting execution path to the final outcome. To scalably construct such workflow-style data, we propose an agentic synthesis pipeline: a Planner orchestrates layer-by-layer generation of compositional conditions, while a Verifiable Programmatic Intermediate Representation (VPIR) ensures each layer's condition is mechanically verifiable. A Composer then assembles these verified layers into complete instructions. Using this pipeline, we construct benchmarks across three visual domains: natural images, data charts, and GUI trajectories. Experiments on a range of MLLMs show that even the strongest model attains only 53.33 Path F1, with sharp drops on hard negatives and as depth or predicate complexity grows, confirming that deep compositional reasoning remains a fundamental challenge.
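The abstract does not specify the concrete VPIR format. As an illustration only, the following minimal sketch (all names and the annotation schema are hypothetical) shows how a layer's compositional condition could be encoded as a machine-checkable predicate over structured scene annotations, and how chained layers yield a branching, early-terminating execution path:

```python
# Hypothetical sketch of a VPIR-style reasoning chain: each layer holds a
# compositional condition (e.g. "a permission dialog appears AND the
# interface is green") as a mechanically verifiable predicate.
from dataclasses import dataclass
from typing import Callable

Scene = dict  # toy stand-in for structured visual annotations

@dataclass
class Layer:
    condition: Callable[[Scene], bool]  # machine-checkable predicate
    if_true: str                        # branch label when condition holds
    if_false: str                       # branch label otherwise

def run_chain(layers: list[Layer], scene: Scene) -> list[str]:
    """Follow the execution path layer by layer; chains may end early."""
    path = []
    for layer in layers:
        path.append(layer.if_true if layer.condition(scene) else layer.if_false)
        if path[-1] == "terminate":
            break
    return path

# Example: a two-layer chain over a toy GUI scene annotation.
scene = {"objects": ["permission_dialog", "allow_button"],
         "attributes": {"interface_color": "green"}}
layers = [
    Layer(lambda s: "permission_dialog" in s["objects"]
                    and s["attributes"]["interface_color"] == "green",
          if_true="click Allow", if_false="terminate"),
    Layer(lambda s: "allow_button" in s["objects"],
          if_true="done", if_false="terminate"),
]
print(run_chain(layers, scene))  # → ['click Allow', 'done']
```

Because each predicate is executable, a ground-truth path can be verified programmatically rather than by human annotation, which is the property the VPIR is described as providing.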

Top-level tags: multi-modal benchmark, model evaluation
Detailed tags: multimodal reasoning, compositional reasoning, visual workflows, verifiable evaluation, mllm benchmarking

MM-CondChain: A Programmatically Verified Benchmark for Visually Grounded Deep Compositional Reasoning


1️⃣ One-Sentence Summary

This paper introduces a new benchmark called MM-CondChain, designed to evaluate the ability of multimodal large language models to perform multi-step, conditionally branching deep logical reasoning after understanding complex visual scenes; the results show that even today's strongest models still find this task highly challenging.

Source: arXiv 2603.12266