LoST: Level of Semantics Tokenization for 3D Shapes
1️⃣ One-Sentence Summary
This paper proposes LoST, a method that has the model understand and encode 3D shapes in order of semantic importance, so it can generate more realistic, semantically clearer 3D models from far less data while significantly improving generation quality and efficiency.
2️⃣ Abstract
Tokenization is a fundamental technique in the generative modeling of various modalities. In particular, it plays a critical role in autoregressive (AR) models, which have recently emerged as a compelling option for 3D generation. However, optimal tokenization of 3D shapes remains an open question. State-of-the-art (SOTA) methods primarily rely on geometric level-of-detail (LoD) hierarchies, originally designed for rendering and compression. These spatial hierarchies are often token-inefficient and lack semantic coherence for AR modeling. We propose Level-of-Semantics Tokenization (LoST), which orders tokens by semantic salience, such that early prefixes decode into complete, plausible shapes that capture the principal semantics, while subsequent tokens refine instance-specific geometric and semantic details. To train LoST, we introduce Relational Inter-Distance Alignment (RIDA), a novel 3D semantic alignment loss that aligns the relational structure of the 3D shape latent space with that of the semantic DINO feature space. Experiments show that LoST achieves SOTA reconstruction, surpassing previous LoD-based 3D shape tokenizers by large margins on both geometric and semantic reconstruction metrics. Moreover, LoST achieves efficient, high-quality AR 3D generation and enables downstream tasks like semantic retrieval, while using only 0.1%-10% of the tokens needed by prior AR models.
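The abstract only names the RIDA loss; its exact formulation is not given here. Below is a minimal, hypothetical PyTorch sketch of what a "relational inter-distance alignment" might look like, assuming the loss matches the batch-wise pairwise-distance structure of the shape latent space to that of the DINO feature space. The function name `rida_loss`, the mean normalization, and the MSE objective are illustrative assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def rida_loss(z: torch.Tensor, f_dino: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of a Relational Inter-Distance Alignment loss.

    z:      (B, Dz) latent codes of the 3D shapes in a batch
    f_dino: (B, Df) DINO features extracted for the same shapes

    Rather than aligning the embeddings themselves, this aligns their
    relational structure (the pattern of pairwise distances), so the two
    spaces may have different dimensionalities.
    """
    # Pairwise Euclidean distance matrices within the batch.
    d_z = torch.cdist(z, z)            # (B, B)
    d_f = torch.cdist(f_dino, f_dino)  # (B, B)

    # Normalize each matrix by its mean distance so the loss is invariant
    # to the global scale of either space (one of several plausible choices).
    d_z = d_z / (d_z.mean() + 1e-8)
    d_f = d_f / (d_f.mean() + 1e-8)

    # Penalize mismatch between the two relational structures.
    return F.mse_loss(d_z, d_f)
```

Matching distances rather than raw features is what allows a 3D latent space and a 2D-derived DINO feature space, which differ in dimension and scale, to be brought into semantic agreement.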
Source: arXiv: 2603.17995