arXiv submission date: 2026-03-24
📄 Abstract - 3DCity-LLM: Empowering Multi-modality Large Language Models for 3D City-scale Perception and Understanding

While multi-modality large language models excel in object-centric or indoor scenarios, scaling them to 3D city-scale environments remains a formidable challenge. To bridge this gap, we propose 3DCity-LLM, a unified framework for 3D city-scale vision-language perception and understanding. 3DCity-LLM employs a coarse-to-fine feature encoding strategy comprising three parallel branches: one for the target object, one for inter-object relationships, and one for the global scene. To facilitate large-scale training, we introduce the 3DCity-LLM-1.2M dataset, which comprises approximately 1.2 million high-quality samples across seven representative task categories, ranging from fine-grained object analysis to multi-faceted scene planning. This strictly quality-controlled dataset integrates explicit 3D numerical information and diverse user-oriented simulations, enriching the diversity and realism of question answering in urban scenarios. Furthermore, we apply a multi-dimensional evaluation protocol based on text-similarity metrics and LLM-based semantic assessment to ensure faithful and comprehensive comparisons across all methods. Extensive experiments on two benchmarks demonstrate that 3DCity-LLM significantly outperforms existing state-of-the-art methods, offering a promising direction for advancing spatial reasoning and urban intelligence. The source code and dataset are available at this https URL.
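The abstract does not detail the branch architectures, so the following is a minimal, hypothetical sketch of how three parallel branches (target object, inter-object relationship, global scene) might each produce one visual token for the LLM. All function names, feature dimensions, and pooling choices below are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

# Hypothetical shared embedding width; the paper does not specify this.
D = 8

def encode_target(points):
    """Fine branch (assumed): mean-pool point features of the queried object."""
    return points.mean(axis=0)

def encode_relations(obj_feats):
    """Middle branch (assumed): pairwise feature differences summarize layout."""
    n = len(obj_feats)
    rels = [obj_feats[i] - obj_feats[j]
            for i in range(n) for j in range(n) if i != j]
    return np.mean(rels, axis=0)

def encode_scene(obj_feats):
    """Coarse branch (assumed): global max-pool over all object features."""
    return np.max(obj_feats, axis=0)

def coarse_to_fine_tokens(points_per_obj, target_idx):
    """Return (3, D) tokens: scene, relationship, and target object."""
    obj_feats = np.stack([p.mean(axis=0) for p in points_per_obj])
    return np.stack([
        encode_scene(obj_feats),                     # global scene token
        encode_relations(obj_feats),                 # inter-object token
        encode_target(points_per_obj[target_idx]),   # target object token
    ])
```

In a real system each branch would be a learned encoder and the resulting tokens would be projected into the LLM's embedding space and prepended to the text prompt; the pooling here only illustrates the coarse-to-fine decomposition.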

Top-level tags: multi-modal, computer vision, llm
Detailed tags: 3d scene understanding, vision-language models, urban intelligence, spatial reasoning, city-scale perception

3DCity-LLM: Empowering Multi-modality Large Language Models for 3D City-scale Perception and Understanding


1️⃣ One-sentence summary

This paper proposes a new framework, 3DCity-LLM, which uses a coarse-to-fine feature encoding method and a large-scale, high-quality dataset to extend the capabilities of multi-modality large language models to understanding and planning tasks in 3D city-scale scenes, significantly outperforming existing methods.

Source: arXiv 2603.23447