📄 Abstract - Part-X-MLLM: Part-aware 3D Multimodal Large Language Model

We introduce Part-X-MLLM, a native 3D multimodal large language model that unifies diverse 3D tasks by formulating them as programs in a structured, executable grammar. Given an RGB point cloud and a natural language prompt, our model autoregressively generates a single, coherent token sequence encoding part-level bounding boxes, semantic descriptions, and edit commands. This structured output serves as a versatile interface to drive downstream geometry-aware modules for part-based generation and editing. By decoupling the symbolic planning from the geometric synthesis, our approach allows any compatible geometry engine to be controlled through a single, language-native frontend. We pre-train a dual-encoder architecture to disentangle structure from semantics and instruction-tune the model on a large-scale, part-centric dataset. Experiments demonstrate that our model excels at producing high-quality, structured plans, enabling state-of-the-art performance in grounded Q&A, compositional generation, and localized editing through one unified interface. Project page: this https URL
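To make the idea of a structured, executable plan concrete, the sketch below shows one hypothetical shape such a plan could take and how a downstream geometry engine might consume it. The plan schema, the field names, and the `GeometryEngine` stub are illustrative assumptions for exposition, not the paper's actual grammar or API.

```python
# Illustrative sketch only: the plan schema and GeometryEngine stub are
# assumptions, not Part-X-MLLM's actual grammar or interface.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PartPlan:
    """One part-level entry in the structured plan."""
    label: str                  # semantic description, e.g. "chair leg"
    bbox: List[float]           # axis-aligned box: [xmin, ymin, zmin, xmax, ymax, zmax]
    edit: Optional[str] = None  # optional edit command, e.g. "lengthen by 10%"

class GeometryEngine:
    """Stand-in for any compatible geometry-aware module behind the frontend."""
    def synthesize(self, part: PartPlan) -> None:
        print(f"generate '{part.label}' inside bbox {part.bbox}")

    def apply_edit(self, part: PartPlan) -> None:
        print(f"edit '{part.label}': {part.edit}")

def execute_plan(plan: List[PartPlan], engine: GeometryEngine) -> None:
    """Walk the symbolic plan and dispatch each step to the geometry engine."""
    for part in plan:
        if part.edit is None:
            engine.synthesize(part)
        else:
            engine.apply_edit(part)

if __name__ == "__main__":
    # A toy plan of the kind the language model might emit for a chair.
    plan = [
        PartPlan("seat",     [-0.4, -0.4, 0.4,  0.4, 0.4, 0.5]),
        PartPlan("backrest", [-0.4,  0.35, 0.5, 0.4, 0.4, 1.0]),
        PartPlan("front-left leg", [-0.4, -0.4, 0.0, -0.3, -0.3, 0.4],
                 edit="lengthen by 10%"),
    ]
    execute_plan(plan, GeometryEngine())
```

Because the plan is purely symbolic, any engine implementing the same dispatch surface could sit behind the one language-native frontend, which is the decoupling the abstract describes.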

Top-level tags: multi-modal, natural language processing, computer vision
Detailed tags: 3d multimodal, part-aware reasoning, structured generation, geometry synthesis, point cloud processing

📄 Paper Summary

Part-X-MLLM: Part-aware 3D Multimodal Large Language Model


1️⃣ One-sentence Summary

This paper proposes a unified 3D multimodal large model that, from language instructions, automatically generates structured programs containing part-level bounding boxes and edit commands, enabling intelligent generation and editing of 3D objects.
