
arXiv submission date: 2026-04-20
📄 Abstract - Multiplication in Multimodal LLMs: Computation with Text, Image, and Audio Inputs

Multimodal LLMs can accurately perceive numerical content across modalities yet fail to perform exact multi-digit multiplication when the identical underlying arithmetic problem is presented as numerals, number words, images, or audio. Because existing benchmarks often lack systematically paired instances across modalities, it remains difficult to compare genuine arithmetic limits within and across model families. We therefore introduce a controlled multimodal multiplication benchmark that factorially varies digit length, digit sparsity, representation (e.g., numerals vs. number words), and modality (text, rendered images, audio), with paired instances from a reproducible generator. We also define arithmetic load, C, as the product of the total and non-zero digit counts, a compact, mechanistically motivated proxy for operation count. Across evaluations, accuracy falls sharply as C grows, often nearing zero for C > 100. C remains predictive of performance across modalities and models, with R-squared often > 0.5, approaching the value obtained from more complex measures of arithmetic load that count intermediate arithmetic steps. A separate perception-versus-computation decomposition shows that multimodal degradation is primarily computational rather than perceptual: on matched-perception checks, models are near-perfect (> 99%) across modalities, even when multiplication accuracy drops. Beyond measuring when models fail, we ask which procedures they are predisposed to follow. We introduce a forced-completion loss probe that scores heuristic-specific reasoning prefixes, including columnar multiplication, distributive decomposition, and rounding/compensation. Decomposition is favored in both text and vision modalities; heuristic-specific LoRA adapters produce near-orthogonal updates yet degrade accuracy, indicating the base model maintains a well-tuned internal router.
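The arithmetic-load proxy C can be sketched in a few lines. This is a minimal illustration, not the paper's code; it assumes both the total and non-zero digit counts are taken over the concatenated digits of the two operands:

```python
def arithmetic_load(a: int, b: int) -> int:
    """Arithmetic load C = (total digit count) * (non-zero digit count).

    Sketch of the proxy described in the abstract; the exact counting
    convention (here: digits of both operands pooled) is an assumption.
    """
    digits = str(a) + str(b)
    total = len(digits)                           # total digit count
    nonzero = sum(1 for d in digits if d != "0")  # non-zero digit count
    return total * nonzero

# 304 * 27 -> 5 digits total, 4 non-zero -> C = 20
print(arithmetic_load(304, 27))
```

Under this convention, sparse operands (many zeros) yield a low C even at long digit lengths, which is what lets the benchmark separate digit length from arithmetic load.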

Top-level tags: llm, multi-modal, model evaluation
Detailed tags: arithmetic reasoning, multimodal benchmark, computation vs perception, heuristic analysis, multi-digit multiplication

Multiplication in Multimodal LLMs: Computation with Text, Image, and Audio Inputs


1️⃣ One-Sentence Summary

This paper finds that although multimodal LLMs can accurately recognize numbers presented in different forms (text, images, audio), they broadly fail at exact multi-digit multiplication; the root cause is insufficient computational ability, not a deficit in perception.

From arXiv: 2604.18203