FewMMBench: A Benchmark for Multimodal Few-Shot Learning
1️⃣ One-Sentence Summary
This paper introduces FewMMBench, a comprehensive benchmark for evaluating the learning ability of multimodal large language models when given only a handful of examples (few-shot). Its experiments find that current models gain little from additional demonstrations or elaborate reasoning prompts, and may even regress.
As multimodal large language models (MLLMs) advance in handling interleaved image-text data, assessing their few-shot learning capabilities remains an open challenge. In this paper, we introduce FewMMBench, a comprehensive benchmark designed to evaluate MLLMs under few-shot conditions, with a focus on In-Context Learning (ICL) and Chain-of-Thought (CoT) prompting. Covering a diverse suite of multimodal understanding tasks, from attribute recognition to temporal reasoning, FewMMBench enables systematic analysis across task types, model families, and prompting strategies. We evaluate 26 open-weight MLLMs from six model families across zero-shot, few-shot, and CoT-augmented few-shot settings. Our findings reveal that instruction-tuned models exhibit strong zero-shot performance but benefit minimally, or even regress, with additional demonstrations or CoT reasoning. Retrieval-based demonstrations and increased context size also yield limited gains. These results highlight FewMMBench as a rigorous testbed for diagnosing and advancing few-shot capabilities in multimodal LLMs. The data is available at: this https URL
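The abstract describes evaluating models under zero-shot, few-shot, and CoT-augmented few-shot settings over interleaved image-text inputs. As a rough illustration of what assembling such a prompt can look like, here is a minimal sketch; the `build_prompt` function, the `<image_i>` placeholder convention, and the demo format are all assumptions for illustration, not the benchmark's actual API.

```python
# Hypothetical sketch of few-shot prompt assembly for an interleaved
# image-text task. The placeholder scheme and function names are
# assumptions, not FewMMBench's actual interface.

def build_prompt(demos, query, use_cot=False):
    """Assemble a few-shot prompt from (question, answer) demonstrations.

    `<image_i>` placeholders stand in for the image tokens that an
    MLLM's processor would substitute for the i-th image.
    """
    parts = []
    for i, (question, answer) in enumerate(demos):
        parts.append(f"<image_{i}> Q: {question}\nA: {answer}")
    # CoT-augmented variant appends a reasoning trigger to the query turn.
    suffix = "Let's think step by step." if use_cot else ""
    parts.append(f"<image_{len(demos)}> Q: {query}\nA: {suffix}")
    return "\n\n".join(parts)

demos = [
    ("What color is the car?", "Red"),
    ("How many dogs are visible?", "Two"),
]
prompt = build_prompt(demos, "What is the person holding?", use_cot=True)
print(prompt)
```

With zero demonstrations the same function yields a zero-shot prompt, so one helper covers all three settings the abstract contrasts.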
Source: arXiv:2602.21854