📄 Abstract - Interpretable Physics Reasoning and Performance Taxonomy in Vision-Language Models

As Vision-Language Models (VLMs) grow in sophistication, their ability to perform reasoning is coming under increasing scrutiny. While they excel at many tasks, their grasp of fundamental scientific principles, such as physics, remains an underexplored frontier. To probe these capabilities, we introduce a novel and accessible framework designed to rigorously evaluate VLMs on their understanding of 2D physics. Our framework features a pragmatic scenario generator that creates a diverse testbed of over 400 problems across four core domains: Projectile Motion, Collision Dynamics, Mechanics, and Fluid Dynamics. Through a comprehensive evaluation of four state-of-the-art VLMs, we demonstrate a strong correlation between model scale and reasoning ability, with our top-performing model, Qwen2.5-VL-7B, achieving an overall score of 0.815. We find that while models excel at formulaic problems, they struggle significantly with domains requiring abstract spatial reasoning. By designing this framework, we aim to democratize the study of scientific reasoning in VLMs and foster deeper insights into their capabilities and limitations.
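The abstract describes a scenario generator that produces physics problems with known ground-truth answers. The paper does not specify its implementation; the following is a minimal hypothetical sketch of how one such domain (projectile motion) could be generated, with the function name, parameter ranges, and output schema all assumptions for illustration.

```python
import math
import random

def generate_projectile_problem(rng):
    """Sketch of one projectile-motion scenario with its ground-truth answer.

    Hypothetical generator: parameter ranges and the question/answer schema
    are illustrative assumptions, not the paper's actual implementation.
    """
    g = 9.81                             # gravitational acceleration, m/s^2
    v0 = rng.uniform(5.0, 30.0)          # launch speed, m/s
    theta = rng.uniform(15.0, 75.0)      # launch angle above horizontal, degrees
    # Standard kinematics on level ground: time of flight and horizontal range.
    t_flight = 2.0 * v0 * math.sin(math.radians(theta)) / g
    x_range = v0 * math.cos(math.radians(theta)) * t_flight
    question = (
        f"A ball is launched at {v0:.1f} m/s at {theta:.1f} degrees above "
        f"the horizontal. How far does it travel before landing?"
    )
    return {"question": question, "answer_m": round(x_range, 2)}

# Seeded RNG makes the generated testbed reproducible.
rng = random.Random(0)
problems = [generate_projectile_problem(rng) for _ in range(100)]
```

A model's free-text answer could then be scored against `answer_m` within a numeric tolerance, one plausible way to obtain the per-domain scores the abstract reports.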

Top-level tags: model evaluation, natural language processing, computer vision
Detailed tags: physics reasoning, vision-language models, benchmark, spatial reasoning, scientific understanding

📄 Paper Summary

Interpretable Physics Reasoning and Performance Taxonomy in Vision-Language Models


1️⃣ One-Sentence Summary

This paper introduces a new framework for evaluating how well vision-language models understand 2D physics, finding that model scale correlates positively with reasoning ability, but that performance drops markedly in domains requiring abstract spatial reasoning.

