arXiv submission date: 2026-03-29
📄 Abstract - Chat-Scene++: Exploiting Context-Rich Object Identification for 3D LLM

Recent advancements in multi-modal large language models (MLLMs) have shown strong potential for 3D scene understanding. However, existing methods struggle with fine-grained object grounding and contextual reasoning, limiting their ability to interpret and interact with complex 3D environments. In this paper, we present Chat-Scene++, an MLLM framework that represents 3D scenes as context-rich object sequences. By structuring scenes as sequences of objects with contextual semantics, Chat-Scene++ enables object-centric representation and interaction. It decomposes a 3D scene into object representations paired with identifier tokens, allowing LLMs to follow instructions across diverse 3D vision-language tasks. To capture inter-object relationships and global semantics, Chat-Scene++ extracts context-rich object features using large-scale pre-trained 3D scene-level and 2D image-level encoders, unlike the isolated per-object features in Chat-Scene. Its flexible object-centric design also supports grounded chain-of-thought (G-CoT) reasoning, enabling the model to distinguish objects at both category and spatial levels during multi-step inference. Without the need for additional task-specific heads or fine-tuning, Chat-Scene++ achieves state-of-the-art performance on five major 3D vision-language benchmarks: ScanRefer, Multi3DRefer, Scan2Cap, ScanQA, and SQA3D. These results highlight its effectiveness in scene comprehension, object grounding, and spatial reasoning. Additionally, without reconstructing 3D worlds through computationally expensive processes, we demonstrate its applicability to real-world scenarios using only 2D inputs.
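The abstract describes pairing each object's representation with an identifier token so the LLM can refer to objects directly during instruction following. The following is a minimal sketch of that pairing step; the function name, token format, and feature shapes are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of the object-centric representation the abstract
# describes: each detected object is paired with an identifier token
# (<OBJ000>, <OBJ001>, ...) alongside its fused feature vector, e.g. a
# concatenation of 3D scene-level and 2D image-level encoder outputs.
# All names and dimensions here are illustrative assumptions.
import numpy as np

def build_object_sequence(object_features):
    """Pair each object's fused feature with an identifier token.

    object_features: list of 1-D arrays, one per detected object.
    Returns (tokens, features): identifier tokens and a stacked matrix
    that an LLM adapter could interleave into its input sequence.
    """
    tokens = [f"<OBJ{i:03d}>" for i in range(len(object_features))]
    features = np.stack(object_features)
    return tokens, features

# Toy example: three objects, each with an 8-dim fused feature vector.
rng = np.random.default_rng(0)
objs = [rng.standard_normal(8) for _ in range(3)]
tokens, feats = build_object_sequence(objs)
```

Because the identifier tokens are ordinary vocabulary items, the model can ground an answer by emitting a token like `<OBJ002>`, which is what lets the same sequence serve grounding, captioning, and QA tasks without task-specific heads.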

Top-level tags: multi-modal natural language processing computer vision
Detailed tags: 3d scene understanding object grounding vision-language models contextual reasoning multi-modal llms

Chat-Scene++: Exploiting Context-Rich Object Identification for 3D LLM


1️⃣ One-sentence summary

This paper proposes a new framework, Chat-Scene++, which represents a 3D scene as a sequence of objects enriched with contextual information, enabling large language models to understand and answer questions about complex 3D environments more accurately, and achieving state-of-the-art performance on multiple standard benchmarks without additional task-specific training.

Source: arXiv 2603.27507