arXiv submission date: 2026-02-11
📄 Abstract - C^2ROPE: Causal Continuous Rotary Positional Encoding for 3D Large Multimodal-Models Reasoning

Recent advances in 3D Large Multimodal Models (LMMs) built on Large Language Models (LLMs) have established the alignment of 3D visual features with LLM representations as the dominant paradigm. However, the inherited Rotary Position Embedding (RoPE) introduces limitations for multimodal processing. Specifically, applying 1D temporal positional indices disrupts the continuity of visual features along the column dimension, resulting in spatial locality loss. Moreover, RoPE follows the prior that temporally closer image tokens are more causally related, leading to long-term decay in attention allocation and causing the model to progressively neglect earlier visual tokens as the sequence length increases. To address these issues, we propose C^2RoPE, an improved RoPE that explicitly models local spatial Continuity and spatial Causal relationships for visual processing. C^2RoPE introduces a spatio-temporal continuous positional embedding mechanism for visual tokens. It first integrates 1D temporal positions with Cartesian-based spatial coordinates to construct a triplet hybrid positional index, and then employs a frequency allocation strategy to encode spatio-temporal positional information across the three index components. Additionally, we introduce Chebyshev Causal Masking, which determines causal dependencies by computing the Chebyshev distance of image tokens in 2D space. Evaluation results across various benchmarks, including 3D scene reasoning and 3D visual question answering, demonstrate C^2RoPE's effectiveness. The code is available at this https URL.
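The abstract describes Chebyshev Causal Masking as determining causal dependencies from the Chebyshev distance between image tokens on a 2D grid. The paper's exact masking rule is not given here, so the following is a minimal illustrative sketch: it assumes tokens are laid out row-major on an `h × w` grid and that a token may attend to another when their Chebyshev distance (the maximum of the coordinate-wise absolute differences) is within a chosen `radius`. The `radius` parameter and the grid layout are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def chebyshev_causal_mask(h, w, radius):
    """Boolean attention mask over an h*w grid of image tokens.

    Token i may attend to token j iff the Chebyshev distance
    max(|dy|, |dx|) between their 2D grid positions is <= radius.
    The radius threshold and row-major layout are illustrative
    assumptions, not the paper's exact scheme.
    """
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pos = np.stack([ys.ravel(), xs.ravel()], axis=1)   # (h*w, 2)
    # Pairwise Chebyshev distance: max over coordinate-wise abs diffs.
    diff = np.abs(pos[:, None, :] - pos[None, :, :])   # (N, N, 2)
    cheb = diff.max(axis=-1)                           # (N, N)
    return cheb <= radius

mask = chebyshev_causal_mask(4, 4, radius=1)
print(mask.shape)          # (16, 16)
print(int(mask[0].sum()))  # corner token (0,0): 3 neighbours + itself = 4
```

A mask like this would typically be combined with the usual temporal causal mask before the attention softmax, so that spatial proximity, rather than 1D sequence order alone, governs which visual tokens are treated as causally related.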

Top-level tags: llm multi-modal model training
Detailed tags: positional encoding 3d vision causal reasoning multimodal models attention mechanism

C^2ROPE: Causal Continuous Rotary Positional Encoding for 3D Large Multimodal-Models Reasoning


1️⃣ One-sentence summary

This paper proposes C^2ROPE, an improved positional encoding method that jointly models the spatial continuity and causal dependencies of visual features. It addresses the tendency of existing 3D large multimodal models to lose spatial detail and neglect earlier content when processing long visual sequences, thereby improving performance on 3D scene understanding and question-answering tasks.
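The abstract also mentions a triplet hybrid positional index combining a 1D temporal position with Cartesian spatial coordinates, with a frequency allocation strategy distributing rotary frequencies across the three components. The allocation scheme itself is not specified in the text; the sketch below assumes a simple round-robin assignment of RoPE frequency pairs to the `(t, y, x)` components, purely to illustrate the idea. The `head_dim`, `base`, and round-robin split are all assumptions, not the paper's design.

```python
import numpy as np

def triplet_rope_angles(t, y, x, head_dim=8, base=10000.0):
    """Rotary angles from a (t, y, x) triplet positional index.

    A minimal sketch of allocating rotary frequency pairs across
    temporal and 2D spatial components. The round-robin split and
    the standard RoPE inverse-frequency schedule are illustrative
    assumptions, not the paper's exact frequency allocation.
    """
    assert head_dim % 2 == 0
    n_pairs = head_dim // 2
    comps = np.array([t, y, x], dtype=np.float64)
    idx = np.arange(n_pairs)
    # Standard RoPE inverse-frequency schedule per pair.
    inv_freq = base ** (-2.0 * idx / head_dim)
    # Round-robin: pair k rotates by component (k mod 3) at frequency k.
    angles = comps[idx % 3] * inv_freq
    return angles

a = triplet_rope_angles(t=3, y=1, x=2)
print(a.shape)  # (4,)
```

In a full implementation, each angle would drive one cos/sin rotation of a query/key dimension pair, exactly as in standard RoPE, so spatially adjacent tokens receive nearby rotations even when far apart in the 1D token sequence.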

Source: arXiv: 2602.10551