arXiv submission date: 2026-02-04
📄 Abstract - Active Asymmetric Multi-Agent Multimodal Learning under Uncertainty

Multi-agent systems are increasingly equipped with heterogeneous multimodal sensors, enabling richer perception but introducing modality-specific and agent-dependent uncertainty. Existing multi-agent collaboration frameworks typically reason at the agent level, assume homogeneous sensing, and handle uncertainty implicitly, limiting robustness under sensor corruption. We propose Active Asymmetric Multi-Agent Multimodal Learning under Uncertainty (A2MAML), a principled approach for uncertainty-aware, modality-level collaboration. A2MAML models each modality-specific feature as a stochastic estimate with uncertainty prediction, actively selects reliable agent-modality pairs, and aggregates information via Bayesian inverse-variance weighting. This formulation enables fine-grained, modality-level fusion, supports asymmetric modality availability, and provides a principled mechanism to suppress corrupted or noisy modalities. Extensive experiments on connected autonomous driving scenarios for collaborative accident detection demonstrate that A2MAML consistently outperforms both single-agent and collaborative baselines, achieving up to an 18.7% higher accident detection rate.
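The abstract does not spell out the fusion equations, but Bayesian inverse-variance weighting has a standard form: each estimate is weighted by its precision (1/variance), and the fused precision is the sum of the kept precisions. Below is a minimal NumPy sketch of that aggregation step under the assumption of per-pair Gaussian feature estimates; the function name `inverse_variance_fusion`, the reliability rule `v.mean() < var_threshold`, and the threshold value are illustrative assumptions standing in for the paper's actual active-selection mechanism.

```python
import numpy as np

def inverse_variance_fusion(means, variances, var_threshold=1.0):
    """Fuse per-(agent, modality) feature estimates by inverse-variance weighting.

    means:         list of feature vectors, one per agent-modality pair
    variances:     list of predicted variance vectors, same shapes as means
    var_threshold: pairs whose mean predicted variance exceeds this are dropped
                   before fusion (a stand-in for the active selection step)
    """
    # Active selection: keep only agent-modality pairs deemed reliable.
    kept = [(m, v) for m, v in zip(means, variances) if v.mean() < var_threshold]
    if not kept:
        raise ValueError("no reliable agent-modality pairs to fuse")

    # Inverse-variance weights: w_i = (1/v_i) / sum_j (1/v_j), per dimension.
    precisions = np.stack([1.0 / v for _, v in kept])    # shape (K, D)
    weights = precisions / precisions.sum(axis=0)
    fused_mean = (weights * np.stack([m for m, _ in kept])).sum(axis=0)
    fused_var = 1.0 / precisions.sum(axis=0)             # fused posterior variance
    return fused_mean, fused_var

# Usage: two agents, 4-dim features; agent 2's sensor is predicted to be noisier,
# so its estimate receives a proportionally smaller weight in the fused feature.
m1, v1 = np.array([0.9, 0.1, 0.4, 0.2]), np.full(4, 0.05)
m2, v2 = np.array([0.5, 0.3, 0.6, 0.1]), np.full(4, 0.50)
fused_mean, fused_var = inverse_variance_fusion([m1, m2], [v1, v2])
```

A design consequence of this weighting is the abstract's claimed robustness: a corrupted modality with a large predicted variance contributes a near-zero weight (or is dropped entirely by selection), so it cannot dominate the fused estimate.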

Top-level tags: multi-modal, multi-agent systems
Detailed tags: uncertainty estimation, Bayesian fusion, sensor fusion, collaborative perception, autonomous driving

Active Asymmetric Multi-Agent Multimodal Learning under Uncertainty


1️⃣ One-Sentence Summary

This paper proposes a method called A2MAML that lets multiple machines equipped with different sensors (such as autonomous vehicles) assess how reliable each sensor's data is while perceiving the environment and preferentially fuse the more trustworthy data, significantly improving accuracy and robustness to interference in collaborative tasks such as accident detection.

Source: arXiv:2602.04763