arXiv submission date: 2026-03-25
📄 Abstract - Le MuMo JEPA: Multi-Modal Self-Supervised Representation Learning with Learnable Fusion Tokens

Self-supervised learning has emerged as a powerful paradigm for learning visual representations without manual annotations, yet most methods still operate on a single modality and therefore miss the complementary structure available from heterogeneous sensors. We present Le MuMo JEPA, a self-supervised framework that learns unified representations from RGB images and aligned companion modalities. In our driving experiments, the second modality is camera-aligned LiDAR depth; we also evaluate RGB-thermal training and transfer on the Teledyne FLIR ADAS benchmark. Our approach extends LeJEPA to the multi-modal setting by learning fusion tokens that act as a latent bottleneck between modality-specific patch stems inside a shared transformer. Our default model employs a pruned fusion strategy: after an initial cross-modal attention layer, modality-specific tokens are dropped, forcing cross-modal information into the shared fusion-token grid as an efficient latent bottleneck before Sketched Isotropic Gaussian Regularization (SIGReg) is applied to the joint multimodal CLS embedding. On Waymo, Le MuMo JEPA gives the strongest performance-efficiency trade-off on downstream patch probes among the from-scratch multimodal baselines, improving CenterNet detection and dense depth while remaining competitive on segmentation. Under from-scratch training on nuScenes, Le MuMo JEPA remains the strongest model, and it also gives the best FLIR results, especially after Waymo-initialized fine-tuning. It also retains the best overall accuracy-efficiency balance in our study at substantially lower compute, memory, and estimated training time.
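To make the "pruned fusion" idea concrete, the following is a minimal NumPy sketch of the token flow the abstract describes: modality-specific patch tokens and a learnable fusion-token grid share one cross-modal attention layer, after which the modality tokens are dropped and only the fusion tokens continue through the network. All sizes, variable names, and the single-head attention are illustrative assumptions; the actual model is a full shared transformer with learned projections and SIGReg applied to the joint CLS embedding.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Single-head scaled dot-product attention over the token axis.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

# Hypothetical sizes: 8 learnable fusion tokens, 16 patch tokens per
# modality, embedding dimension 32.
d, n_fusion, n_patch = 32, 8, 16
rng = np.random.default_rng(0)

fusion = rng.standard_normal((n_fusion, d))   # learnable fusion-token grid
rgb = rng.standard_normal((n_patch, d))       # RGB patch-stem output
depth = rng.standard_normal((n_patch, d))     # LiDAR-depth patch-stem output

# Initial cross-modal attention layer: every token attends over the
# full multimodal sequence [fusion | rgb | depth].
seq = np.concatenate([fusion, rgb, depth], axis=0)
seq = seq + attention(seq, seq, seq)          # residual attention update

# Pruned fusion: drop the modality-specific tokens, keeping only the
# fusion tokens as the latent bottleneck for the remaining layers.
bottleneck = seq[:n_fusion]
print(bottleneck.shape)  # (8, 32)
```

The efficiency gain in this sketch comes from sequence-length reduction: after the first layer, subsequent layers operate on `n_fusion` tokens instead of `n_fusion + 2 * n_patch`, so cross-modal information must be compressed into the small fusion grid.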

Top-level tags: multi-modal, model training, computer vision
Detailed tags: self-supervised learning, representation learning, fusion tokens, multi-modal fusion, vision transformers

Le MuMo JEPA: Multi-Modal Self-Supervised Representation Learning with Learnable Fusion Tokens


1️⃣ One-Sentence Summary

This paper proposes a new self-supervised learning framework called Le MuMo JEPA, which introduces learnable "fusion tokens" to efficiently integrate multi-modal data such as RGB images and LiDAR depth, learning stronger unified feature representations at lower computational cost for tasks such as autonomous driving.

Source: arXiv:2603.24327