MaMa: A Game-Theoretic Approach for Designing Safe Agentic Systems
1️⃣ One-sentence summary
This paper proposes a game-theoretic algorithm called MaMa: by pitting the system designer against a simulated "Meta-Adversary" in an adversarial game, it automatically designs multi-agent systems that remain safe even when some of their agents are maliciously controlled.
LLM-based multi-agent systems have demonstrated impressive capabilities, but they also introduce significant safety risks when individual agents fail or behave adversarially. In this work, we study the automated design of agentic systems that remain safe even when a subset of agents is compromised. We formalize this challenge as a Stackelberg security game between a system designer (the Meta-Agent) and a best-responding Meta-Adversary that selects and compromises a subset of agents to minimize safety. We propose Meta-Adversary-Meta-Agent (MaMa), a novel algorithm for approximately solving this game and automatically designing safe agentic systems. Our approach uses LLM-based adversarial search, where the Meta-Agent iteratively proposes system designs and receives feedback based on the strongest attacks discovered by the Meta-Adversary. Empirical evaluations across diverse environments show that systems designed with MaMa consistently defend against worst-case attacks while maintaining performance comparable to systems optimized solely for task success. Moreover, the resulting systems generalize to stronger adversaries, as well as ones with different attack objectives or underlying LLMs, demonstrating robust safety beyond the training setting.
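The iterative loop the abstract describes — the Meta-Agent proposes a design, the Meta-Adversary best-responds by compromising the subset of agents that minimizes safety, and the design with the best worst-case safety is retained — can be sketched in miniature. The sketch below is a toy stand-in, not the paper's implementation: the agent names, the safeguard-level encoding, and the scoring functions are all illustrative assumptions (the real system uses LLM-based proposal and attack search).

```python
import random
from itertools import combinations

random.seed(0)

AGENTS = ["planner", "coder", "reviewer", "executor"]

def propose_design(round_idx):
    """Stand-in for the LLM-based Meta-Agent: a design is just a set of
    per-agent safeguard levels (0 = none, 2 = strict oversight)."""
    return {a: random.randint(0, 2) for a in AGENTS}

def safety_under_attack(design, compromised):
    """Stand-in safety metric: a compromised agent hurts safety unless
    its safeguard level is high."""
    score = 10.0
    for a in compromised:
        score -= max(0, 3 - design[a])
    return score

def best_response_attack(design, budget=2):
    """Meta-Adversary: exhaustively pick the subset (up to `budget`)
    of agents whose compromise minimizes safety."""
    worst_subset, worst_score = None, float("inf")
    for k in range(1, budget + 1):
        for subset in combinations(AGENTS, k):
            s = safety_under_attack(design, subset)
            if s < worst_score:
                worst_subset, worst_score = subset, s
    return worst_subset, worst_score

def mama_search(rounds=20):
    """Outer loop: keep the proposed design whose worst-case
    (best-response) safety score is highest."""
    best_design, best_worst_case = None, float("-inf")
    for r in range(rounds):
        design = propose_design(r)
        _, worst_case = best_response_attack(design)
        if worst_case > best_worst_case:
            best_design, best_worst_case = design, worst_case
    return best_design, best_worst_case

design, score = mama_search()
print(design, score)
```

Note the Stackelberg structure: the defender (outer loop) commits to a design first, and is evaluated only against the adversary's *best response*, not against a fixed attack — which is what the abstract credits for generalization to stronger or differently-motivated adversaries.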
Source: arXiv:2602.04431