Unsupervised Multi-agent and Single-agent Perception from Cooperative Views
1️⃣ One-sentence summary
This paper proposes an unsupervised perception framework that requires no human annotations: by letting multiple agents share sensor data to reinforce one another, it simultaneously improves both multi-agent cooperative perception and single-agent independent perception on the 3D object detection task.
LiDAR-based multi-agent and single-agent perception has shown promising performance in environmental understanding for robots and automated vehicles. However, no existing method solves both multi-agent and single-agent perception simultaneously in an unsupervised way. By sharing sensor data between multiple agents via communication, this paper uncovers two key insights: 1) the improved point cloud density obtained from cooperative views after data sharing benefits unsupervised object classification, and 2) the cooperative view of multiple agents can serve as unsupervised guidance for 3D object detection in the single view. Building on these two insights, we propose an Unsupervised Multi-agent and Single-agent (UMS) perception framework that leverages multi-agent cooperation without human annotations to solve both perception tasks simultaneously. UMS combines a learning-based Proposal Purifying Filter, which better classifies candidate proposals after multi-agent point cloud density cooperation, with a Progressive Proposal Stabilizing module that yields reliable pseudo labels via easy-to-hard curriculum learning. Furthermore, we design Cross-View Consensus Learning, which uses the multi-agent cooperative view to guide detection in the single-agent view. Experimental results on two public datasets, V2V4Real and OPV2V, show that UMS achieves significantly higher 3D detection performance than state-of-the-art methods on both multi-agent and single-agent perception tasks in the unsupervised setting.
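The abstract does not detail how "easy-to-hard" curriculum learning produces pseudo labels in the Progressive Proposal Stabilizing module. A common realization of this idea, shown below as a minimal sketch (not the paper's actual implementation; the function name, the linear threshold schedule, and the threshold values are assumptions), is to keep only high-confidence proposals in early rounds and gradually relax the confidence threshold so harder proposals are admitted later:

```python
import numpy as np

def select_pseudo_labels(proposals, scores, round_idx, num_rounds,
                         start_thresh=0.9, end_thresh=0.5):
    """Easy-to-hard curriculum selection (illustrative sketch).

    Early rounds keep only high-confidence ("easy") proposals as pseudo
    labels; the threshold relaxes linearly toward end_thresh so that
    harder proposals are admitted in later rounds.
    """
    # Fraction of the curriculum completed, in [0, 1].
    frac = round_idx / max(num_rounds - 1, 1)
    thresh = start_thresh + frac * (end_thresh - start_thresh)
    keep = np.asarray(scores) >= thresh
    kept = [p for p, k in zip(proposals, keep) if k]
    return kept, thresh

# Example: three candidate proposals with detection confidences.
proposals = ["box_a", "box_b", "box_c"]
scores = [0.95, 0.70, 0.40]
print(select_pseudo_labels(proposals, scores, round_idx=0, num_rounds=5))
print(select_pseudo_labels(proposals, scores, round_idx=4, num_rounds=5))
```

In round 0 only the 0.95-confidence proposal passes the 0.9 threshold; by the final round the threshold has dropped to 0.5 and the 0.70-confidence proposal is also admitted.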
Source: arXiv: 2604.05354