
arXiv submission date: 2026-04-14
📄 Abstract - Cross-Attentive Multiview Fusion of Vision-Language Embeddings

Vision-language models have been key to the development of open-vocabulary 2D semantic segmentation. Lifting these models from 2D images to 3D scenes, however, remains a challenging problem. Existing approaches typically back-project and average 2D descriptors across views, or heuristically select a single representative one, often resulting in suboptimal 3D representations. In this work, we introduce a novel multiview transformer architecture that cross-attends across vision-language descriptors from multiple viewpoints and fuses them into a unified per-3D-instance embedding. As a second contribution, we leverage multiview consistency as a self-supervision signal for this fusion, which significantly improves performance when added to a standard supervised target-class loss. Our Cross-Attentive Multiview Fusion, which we denote with its acronym CAMFusion, not only consistently outperforms naive averaging or single-view descriptor selection, but also achieves state-of-the-art results on 3D semantic and instance classification benchmarks, including zero-shot evaluations on out-of-domain datasets.
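The fusion step described above can be illustrated with a minimal, single-head sketch: a learnable query vector cross-attends over the 2D vision-language descriptors back-projected from each view onto one 3D instance, and the attention-weighted sum becomes the per-instance embedding. A cosine-agreement term then serves as the multiview-consistency self-supervision signal. This is an assumption-laden toy in numpy, not the paper's implementation; all names (`cross_attentive_fusion`, `query`, `Wk`, `Wv`, `consistency_loss`) and the single-head, single-query parameterization are illustrative choices.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attentive_fusion(view_embs, query, Wk, Wv):
    """Fuse per-view descriptors into one per-3D-instance embedding.

    view_embs: (V, D) 2D vision-language descriptors for one instance,
               one row per viewpoint.
    query:     (D,)   learnable fusion query (hypothetical parameterization).
    Wk, Wv:    (D, D) key / value projection matrices.
    Returns the fused (D,) embedding and the (V,) attention weights.
    """
    K = view_embs @ Wk                            # (V, D) keys
    Vv = view_embs @ Wv                           # (V, D) values
    scores = K @ query / np.sqrt(K.shape[-1])     # (V,) scaled attention logits
    w = softmax(scores)                           # (V,) weights over views
    fused = w @ Vv                                # (D,) attention-weighted fusion
    return fused, w

def consistency_loss(fused, view_embs):
    """Self-supervision sketch: push the fused embedding toward agreement
    with every view's descriptor (mean cosine distance)."""
    f = fused / np.linalg.norm(fused)
    v = view_embs / np.linalg.norm(view_embs, axis=1, keepdims=True)
    return float(np.mean(1.0 - v @ f))
```

In a full model the projections and query would be trained jointly, and this consistency term would be added to the supervised target-class loss, as the abstract describes; averaging across views corresponds to the degenerate case of uniform attention weights.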

Top-level tags: computer vision, multi-modal, model training
Detailed tags: 3d scene understanding, vision-language models, multiview fusion, self-supervised learning, semantic segmentation

Cross-Attentive Multiview Fusion of Vision-Language Embeddings


1️⃣ One-sentence summary

This paper proposes a method called CAMFusion, which uses a multiview cross-attention transformer to fuse vision-language information from different viewpoints into a better semantic representation for each 3D object, achieving state-of-the-art performance on several 3D scene-understanding tasks.

Source: arXiv: 2604.12551