EIMC:高效的实例感知多模态协同感知 / EIMC: Efficient Instance-aware Multi-modal Collaborative Perception
1️⃣ One-sentence summary
This paper proposes EIMC, an efficient multi-modal collaborative perception method. Through early collaboration and a heatmap-based instance selection mechanism, it significantly reduces communication bandwidth while substantially improving object detection accuracy for multi-vehicle collaborative perception in autonomous driving.
Multi-modal collaborative perception has attracted great attention for enhancing the safety of autonomous driving. However, current multi-modal approaches follow a "local fusion then communication" sequence: each agent fuses its multi-modal data locally and requires high bandwidth to transmit its feature data before collaborative fusion. EIMC instead proposes an early collaborative paradigm. It injects lightweight collaborative voxels, transmitted by neighboring agents, into the ego vehicle's local modality-fusion step, yielding compact yet informative 3D collaborative priors that tighten cross-modal alignment. Next, a heatmap-driven consensus protocol identifies exactly where cooperation is needed by computing per-pixel confidence heatmaps. Only the Top-K instance vectors located in these low-confidence, high-discrepancy regions are queried from peers and then fused via cross-attention for completion. Finally, a refinement fusion collects the Top-K most confident instances from each agent and enhances their features using self-attention. This instance-centric messaging reduces redundancy while ensuring that critical occluded objects are recovered. Evaluated on OPV2V and DAIR-V2X, EIMC attains 73.01% AP@0.5 while reducing byte bandwidth usage by 87.98% compared with the best published multi-modal collaborative detector. Code publicly released at this https URL.
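The heatmap-driven selection step can be illustrated with a minimal sketch: given a per-pixel confidence heatmap, pick the K lowest-confidence locations as the regions where cooperation is requested from peers. The function name, array shapes, and the plain argpartition-based selection below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def select_topk_requests(conf_heatmap: np.ndarray, k: int) -> list:
    """Return the (row, col) positions of the k lowest-confidence cells.

    Low confidence marks likely occluded or ambiguous regions, so these
    positions are the ones whose instance vectors would be queried from
    neighboring agents. (Hypothetical helper; not the paper's API.)
    """
    flat = conf_heatmap.ravel()
    # argpartition finds the k smallest confidences without a full sort
    idx = np.argpartition(flat, k)[:k]
    rows, cols = np.unravel_index(idx, conf_heatmap.shape)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy example: a 4x4 heatmap where two cells are clearly uncertain
heat = np.full((4, 4), 0.9)
heat[1, 2] = 0.1  # e.g. an occluded object
heat[3, 0] = 0.2
requests = select_topk_requests(heat, k=2)
```

In the full pipeline, the vectors gathered at these positions would then be completed via cross-attention with the peers' features; only K positions are transmitted, which is the source of the bandwidth savings.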
Source: arXiv: 2603.02532