Helpful to a Fault: Measuring Illicit Assistance in Multi-Turn, Multilingual LLM Agents
1️⃣ One-Sentence Summary
This paper proposes an automated testing framework called STING for evaluating how LLM agents can be gradually induced, over multi-turn and multilingual dialogues, into completing harmful or illegal tasks, exposing the shortcomings of existing single-turn testing methods.
LLM-based agents execute real-world workflows via tools and memory. The same affordances, however, allow ill-intentioned adversaries to direct these agents toward complex misuse scenarios. Existing agent-misuse benchmarks largely test single-prompt instructions, leaving a gap in measuring how agents end up assisting with harmful or illegal tasks over multiple turns. We introduce STING (Sequential Testing of Illicit N-step Goal execution), an automated red-teaming framework that constructs a step-by-step illicit plan grounded in a benign persona and iteratively probes a target agent with adaptive follow-ups, using judge agents to track phase completion. We further introduce an analysis framework that models multi-turn red-teaming as a time-to-first-jailbreak random variable, enabling analyses such as discovery curves, hazard-ratio attribution by attack language, and a new metric: Restricted Mean Jailbreak Discovery. Across AgentHarm scenarios, STING yields substantially higher illicit-task completion than single-turn prompting and chat-oriented multi-turn baselines adapted to tool-using agents. In multilingual evaluations across six non-English settings, we find that attack success and illicit-task completion do not consistently increase in lower-resource languages, diverging from common chatbot findings. Overall, STING offers a practical way to evaluate and stress-test agent misuse in realistic deployment settings, where interactions are inherently multi-turn and often multilingual.
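To make the time-to-first-jailbreak framing concrete, here is a minimal sketch of how a discovery curve and a restricted-mean metric could be computed from per-dialogue outcomes. This is not the paper's implementation: the function names are hypothetical, and reading Restricted Mean Jailbreak Discovery as the area under the discovery curve (by analogy with restricted mean survival time) is an assumption.

```python
import numpy as np

def discovery_curve(first_jb_turn, max_turns):
    """Empirical discovery curve: the fraction of dialogues whose first
    jailbreak occurs at or before each turn t. Dialogues that never
    jailbreak within the turn budget are encoded as np.inf, i.e.
    right-censored at max_turns."""
    first = np.asarray(first_jb_turn, dtype=float)
    turns = np.arange(1, max_turns + 1)
    return turns, np.array([(first <= t).mean() for t in turns])

def restricted_mean_jailbreak_discovery(first_jb_turn, horizon):
    """One plausible reading of RMJD: the area under the discovery curve
    up to `horizon`, normalized by the horizon. Higher values indicate
    earlier and more frequent jailbreaks within the budget."""
    _, curve = discovery_curve(first_jb_turn, horizon)
    return curve.sum() / horizon  # Riemann sum over the step curve

# Toy example: 5 dialogues; two are never jailbroken within 10 turns.
first_jb = [3, 7, np.inf, 2, np.inf]
turns, curve = discovery_curve(first_jb, max_turns=10)
print(dict(zip(turns.tolist(), curve.round(2).tolist())))
print(restricted_mean_jailbreak_discovery(first_jb, horizon=10))  # 0.42
```

Under this reading, hazard-ratio attribution by attack language would amount to fitting a standard survival regression (e.g., a Cox proportional-hazards model) with the attack language as a covariate on the same first-jailbreak times.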
Source: arXiv: 2602.16346