arXiv submission date: 2026-03-17
📄 Abstract - 360° Image Perception with MLLMs: A Comprehensive Benchmark and a Training-Free Method

Multimodal Large Language Models (MLLMs) have shown impressive abilities in understanding and reasoning over conventional images. However, their perception of 360° images remains largely underexplored. Unlike conventional images, 360° images capture the entire surrounding environment, enabling holistic spatial reasoning but introducing challenges such as geometric distortion and complex spatial relations. To comprehensively assess MLLMs' capabilities to perceive 360° images, we introduce 360Bench, a Visual Question Answering (VQA) benchmark featuring 7K-resolution 360° images and seven representative (sub)tasks, with annotations carefully curated by human annotators. Using 360Bench, we systematically evaluate seven MLLMs and six enhancement methods, revealing their shortcomings in 360° image perception. To address these challenges, we propose Free360, a training-free scene-graph-based framework for high-resolution 360° VQA. Free360 decomposes the reasoning process into modular steps, applies adaptive spherical image transformations to 360° images tailored to each step, and seamlessly integrates the resulting information into a unified graph representation for answer generation. Experiments show that Free360 consistently improves its base MLLM and provides a strong training-free solution for 360° VQA tasks. The source code and dataset will be publicly released upon acceptance.
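The abstract outlines the Free360 pipeline as three stages: decompose the question into modular reasoning steps, apply a step-specific spherical transformation to the 360° image, and merge the extracted information into a unified scene graph used for answer generation. The following is a minimal, purely illustrative sketch of that control flow; every function name and data structure here is an assumption for demonstration, not the authors' implementation (which in practice would call an MLLM at each stage).

```python
# Hypothetical sketch of a Free360-style pipeline as described in the abstract.
# All names and logic are illustrative stubs, not the paper's actual code.

def decompose_question(question):
    """Plan modular reasoning steps for a 360° VQA question (stubbed).

    A real system would prompt an MLLM to produce this plan; here we
    return a fixed two-step plan for illustration.
    """
    return [
        {"op": "locate", "target": "object"},
        {"op": "relate", "target": "spatial_relation"},
    ]

def spherical_transform(image, step):
    """Choose a view transformation suited to the current step (stubbed).

    E.g. a perspective crop for localizing an object, versus keeping the
    full equirectangular view for global spatial relations.
    """
    view = "perspective" if step["op"] == "locate" else "equirectangular"
    return {"view": view, "source": image}

def update_scene_graph(graph, step, view):
    """Merge the information extracted in one step into a unified graph."""
    node = f'{step["op"]}:{view["view"]}'
    graph.setdefault("nodes", []).append(node)
    return graph

def answer_from_graph(graph, question):
    """Generate the final answer from the aggregated graph (stubbed)."""
    return f"answer derived from {len(graph['nodes'])} graph nodes"

def free360_pipeline(image, question):
    """Decompose -> transform per step -> accumulate graph -> answer."""
    graph = {}
    for step in decompose_question(question):
        view = spherical_transform(image, step)
        graph = update_scene_graph(graph, step, view)
    return answer_from_graph(graph, question)

print(free360_pipeline("pano_7k.jpg", "What is to the left of the sofa?"))
```

The sketch only captures the training-free, modular structure: no weights are updated, and each reasoning step independently selects its own image transformation before contributing to the shared graph.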

Top-level tags: multi-modal, benchmark, model evaluation
Detailed tags: 360° image perception, visual question answering, multimodal LLMs, spatial reasoning, training-free method

360° Image Perception with MLLMs: A Comprehensive Benchmark and a Training-Free Method


1️⃣ One-sentence summary

This paper introduces the first benchmark for evaluating multimodal large models' ability to understand 360° panoramic images, and proposes a training-free framework that decomposes reasoning over a scene graph, substantially improving model performance on panoramic visual question answering.

Source: arXiv:2603.16179