Learning Probabilistic Responsibility Allocations for Multi-Agent Interactions
1️⃣ One-Sentence Summary
This paper proposes a new method that learns the uncertainty in how humans allocate responsibility in multi-agent interactions (such as driving scenarios), in order to inform the design of more socially compliant and trustworthy autonomous systems.
Human behavior in interactive settings is shaped not only by individual objectives but also by shared constraints with others, such as safety. Understanding how people allocate responsibility, i.e., how much one deviates from their desired policy to accommodate others, can inform the design of socially compliant and trustworthy autonomous systems. In this work, we introduce a method for learning a probabilistic responsibility allocation model that captures the multimodal uncertainty inherent in multi-agent interactions. Specifically, our approach leverages the latent space of a conditional variational autoencoder, combined with techniques from multi-agent trajectory forecasting, to learn a distribution over responsibility allocations conditioned on scene and agent context. Although ground-truth responsibility labels are unavailable, the model remains tractable by incorporating a differentiable optimization layer that maps responsibility allocations to induced controls, which are available. We evaluate our method on the INTERACTION driving dataset and demonstrate that it not only achieves strong predictive performance but also provides interpretable insights, through the lens of responsibility, into patterns of multi-agent interaction.
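The key trick in the abstract is that responsibility labels are never observed, but controls are: a differentiable layer maps a candidate responsibility allocation to the controls it would induce, so the model can be supervised on controls alone. The sketch below illustrates this idea with a hypothetical, deliberately simplified variant (a single shared linear safety constraint with a closed-form correction), not the paper's actual optimization layer; the function name, constraint form, and all parameters are illustrative assumptions.

```python
def induced_controls(u_des, a, b, resp):
    """Map a responsibility allocation to induced controls (toy sketch).

    Hypothetical setup, not the paper's layer: each agent i has a scalar
    desired control u_des[i]; a shared linear safety constraint
        sum_i a[i] * u[i] >= b
    must hold. The responsibility allocation `resp` (nonnegative, sums
    to 1) decides what fraction of the total constraint violation each
    agent absorbs by deviating from its desired control. The correction
    has a closed form, so the map is differentiable in `resp` -- the
    property that lets a learned responsibility model be trained with
    supervision on observed controls only.
    """
    # violation of the shared constraint at the desired controls
    slack = b - sum(ai * ui for ai, ui in zip(a, u_des))
    if slack <= 0:  # constraint already satisfied: nobody deviates
        return list(u_des)
    # agent i deviates just enough to close its share resp[i] of the gap:
    # a[i] * delta_i = resp[i] * slack  =>  delta_i = resp[i] * slack / a[i]
    return [ui + ri * slack / ai if ai != 0 else ui
            for ui, ai, ri in zip(u_des, a, resp)]


# Example: agent 0 takes 80% of the responsibility, agent 1 takes 20%.
u = induced_controls(u_des=[1.0, -0.5], a=[1.0, 1.0], b=1.0,
                     resp=[0.8, 0.2])
print(u)  # the induced controls satisfy the constraint exactly
```

In the paper's setting, an analogous (but richer) optimization layer sits after the CVAE decoder, so the latent distribution over responsibility allocations is shaped entirely by how well the induced controls match the controls observed in data.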
Source: arXiv: 2604.13128