arXiv submission date: 2026-03-05
📄 Abstract - Alignment Backfire: Language-Dependent Reversal of Safety Interventions Across 16 Languages in LLM Multi-Agent Systems

In perpetrator treatment, a recurring observation is the dissociation between insight and action: offenders articulate remorse yet behavioral change does not follow. We report four preregistered studies (1,584 multi-agent simulations across 16 languages and three model families) demonstrating that alignment interventions in large language models produce a structurally analogous phenomenon: surface safety that masks or generates collective pathology and internal dissociation. In Study 1 (N = 150), increasing alignment-instructed agents reduced collective pathology in English (g = -1.844, p < .0001) but amplified it in Japanese (g = +0.771, p = .038)--a directional reversal we term "alignment backfire." Study 2 (N = 1,174) extended to 16 languages: alignment-induced dissociation was near-universal (15/16 languages; beta = 0.0667, p < .0001), while collective pathology bifurcated along cultural-linguistic lines (interaction beta = 0.0684, p = .0003), correlating with Power Distance Index (r = 0.474, p = .064). Study 3 (N = 180) tested individuation as countermeasure; individuated agents became the primary source of both pathology and dissociation (DI = +1.120) with conformity above 84%--demonstrating iatrogenesis. Study 4 (N = 80) validated patterns across Llama 3.3 70B, GPT-4o-mini, and Qwen3-Next-80B-A3B, confirming English safety is model-general while Japanese backfire is model-specific. These findings reframe alignment as a behavioral intervention subject to risk homeostasis and iatrogenesis. Language space--the linguistic, pragmatic, and cultural properties inherited from training data--structurally determines alignment outcomes. Safety validated in English does not transfer to other languages, and prompt-level interventions cannot override language-space-level constraints.
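The effect sizes reported above are Hedges' g, the standardized mean difference between conditions with a small-sample bias correction. As a minimal sketch of how such a figure is computed, here is a self-contained Python example on hypothetical per-simulation pathology scores; the function, variable names, and synthetic data are illustrative assumptions, not the paper's actual pipeline or data.

```python
import numpy as np

def hedges_g(treated: np.ndarray, control: np.ndarray) -> float:
    """Standardized mean difference with small-sample correction (Hedges' g).

    Negative g: the treated (alignment-instructed) condition scored lower
    on the pathology metric, i.e. the intervention helped. Positive g:
    pathology increased, i.e. the intervention backfired, as the paper
    reports for Japanese in Study 1.
    """
    n1, n2 = len(treated), len(control)
    # Pooled standard deviation (denominator of Cohen's d)
    s_pooled = np.sqrt(((n1 - 1) * treated.var(ddof=1)
                        + (n2 - 1) * control.var(ddof=1))
                       / (n1 + n2 - 2))
    d = (treated.mean() - control.mean()) / s_pooled
    # Small-sample bias correction that turns Cohen's d into Hedges' g
    correction = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * correction

# Hypothetical per-run collective-pathology scores (synthetic, for illustration):
rng = np.random.default_rng(0)
english_aligned = rng.normal(0.30, 0.10, 75)  # pathology drops under alignment
english_control = rng.normal(0.55, 0.12, 75)
print(f"g (English, illustrative): {hedges_g(english_aligned, english_control):+.3f}")
```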

Top-level tags: llm multi-agents model evaluation
Detailed tags: alignment safety cross-lingual evaluation multi-agent simulation safety intervention language-space

Alignment Backfire: Language-Dependent Reversal of Safety Interventions Across 16 Languages in LLM Multi-Agent Systems


1️⃣ One-Sentence Summary

This study finds that "alignment" interventions intended to improve the safety of large language models have effects that depend strongly on language and cultural context: they work in some languages (such as English) but actually amplify harmful behavior in others (such as Japanese), exposing a serious limitation of safety evaluations conducted in a single language, especially English.

Source: arXiv:2603.04904