TreeTeaming: Autonomous Red-Teaming of Vision-Language Models via Hierarchical Strategy Exploration
1️⃣ One-Sentence Summary
This paper proposes TreeTeaming, a new automated red-teaming method in which a large language model autonomously constructs and expands a tree of attack strategies, enabling it to uncover safety vulnerabilities in vision-language models more effectively than prior approaches while achieving higher attack success rates and generating stealthier attacks.
The rapid advancement of Vision-Language Models (VLMs) has brought their safety vulnerabilities into sharp focus. However, existing red teaming methods are fundamentally constrained by an inherent linear exploration paradigm, confining them to optimizing within a predefined strategy set and preventing the discovery of novel, diverse exploits. To transcend this limitation, we introduce TreeTeaming, an automated red teaming framework that reframes strategy exploration from static testing into a dynamic, evolutionary discovery process. At its core lies a strategic Orchestrator, powered by a Large Language Model (LLM), which autonomously decides whether to evolve promising attack paths or explore diverse strategic branches, thereby dynamically constructing and expanding a strategy tree. A multimodal actuator is then tasked with executing these complex strategies. In experiments across 12 prominent VLMs, TreeTeaming achieves state-of-the-art attack success rates on 11 models, outperforming existing methods and reaching up to 87.60% on GPT-4o. The framework also demonstrates superior strategic diversity over the union of previously published jailbreak strategies. Furthermore, the generated attacks exhibit an average toxicity reduction of 23.09%, showcasing their stealth and subtlety. Our work introduces a new paradigm for automated vulnerability discovery, underscoring the necessity of proactive exploration beyond static heuristics to secure frontier AI models.
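The abstract's core mechanic, an Orchestrator that either evolves a promising attack path or branches into a diverse new strategy, can be illustrated with a minimal sketch. This is not the paper's implementation: the node scores, the `evolve_threshold`, and the strategy strings stand in for the LLM-driven judgments TreeTeaming actually uses, and are assumptions made purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class StrategyNode:
    # One node of the attack-strategy tree. 'score' is a stand-in for the
    # success signal a judge model would provide (hypothetical field names).
    strategy: str
    score: float = 0.0
    children: list["StrategyNode"] = field(default_factory=list)

def _leaves(node: StrategyNode) -> list[StrategyNode]:
    """Collect all leaf nodes of the strategy tree."""
    if not node.children:
        return [node]
    out: list[StrategyNode] = []
    for child in node.children:
        out.extend(_leaves(child))
    return out

def orchestrate_step(root: StrategyNode, evolve_threshold: float = 0.5) -> StrategyNode:
    """One mocked Orchestrator decision: pick the best-scoring leaf; if it
    looks promising, evolve it (deepen that path), otherwise attach a fresh,
    diverse branch at the root (widen the tree)."""
    best = max(_leaves(root), key=lambda n: n.score)
    if best.score >= evolve_threshold:
        child = StrategyNode(strategy=f"{best.strategy} >> refined")
        best.children.append(child)   # exploit: extend a promising path
    else:
        child = StrategyNode(strategy="new-branch")
        root.children.append(child)   # explore: open a new strategic branch
    return child

# Toy usage: a promising seed strategy gets evolved rather than branched.
root = StrategyNode("role-play", score=0.7)
first = orchestrate_step(root)   # deepens the "role-play" path
```

The exploit/explore split here is the essential shape of the framework's tree construction; in the real system both the scoring and the generation of refined or novel strategies would be delegated to the LLM Orchestrator rather than hard-coded.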
Source: arXiv: 2603.22882