Multilingual Safety Alignment via Sparse Weight Editing
1️⃣ One-Sentence Summary
This paper proposes a new training-free method: by precisely editing a small set of critical "safety neurons" in a large language model, it maps harmful content in low-resource languages onto the safe processing patterns of high-resource languages, closing the cross-lingual gap in safety protection at low cost.
Large Language Models (LLMs) exhibit significant safety disparities across languages: prompts in low-resource languages (LRLs) often bypass the safety guardrails established for high-resource languages (HRLs) such as English. Existing remedies, such as multilingual supervised fine-tuning (SFT) or Reinforcement Learning from Human Feedback (RLHF), are computationally expensive and depend on scarce multilingual safety data. In this work, we propose a novel, training-free alignment framework based on Sparse Weight Editing. Observing that safety capabilities are localized within a sparse set of safety neurons, we formulate cross-lingual alignment as a constrained linear transformation. We derive a closed-form solution that optimally maps the harmful representations of LRLs into the robust safety subspaces of HRLs, while preserving general utility via a null-space projection constraint. Extensive experiments across 8 languages and multiple model families (Llama-3, Qwen-2.5) demonstrate that our method substantially reduces the Attack Success Rate (ASR) in LRLs with negligible impact on general reasoning capabilities, all achieved with a single, data-efficient calculation.
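The paper does not provide implementation details here, but the abstract's recipe (closed-form linear map from LRL to HRL representations, a null-space projection to protect utility, and sparsification to a few "safety neurons") can be sketched in NumPy. All names (`X_lrl`, `X_hrl`, `U`, the dimensions, and the top-k row selection) are hypothetical illustrations, not the authors' actual notation or procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_harm, n_util = 64, 32, 16  # hidden size, #harmful pairs, #utility samples

# Toy stand-ins for hidden states (one column per prompt):
X_lrl = rng.normal(size=(d, n_harm))   # harmful low-resource-language reps
X_hrl = rng.normal(size=(d, n_harm))   # matched safe high-resource-language reps
U     = rng.normal(size=(d, n_util))   # general-utility reps to leave untouched

# Desired output change for the harmful inputs.
E = X_hrl - X_lrl

# Closed-form least-squares edit: dW maps each LRL state toward its HRL target.
dW = E @ np.linalg.pinv(X_lrl)

# Null-space projection: P annihilates the utility subspace on the input side,
# so (dW @ P) @ u = 0 for any u in span(U), and utility behavior is preserved.
P = np.eye(d) - U @ np.linalg.pinv(U)
dW_proj = dW @ P

# Sparse editing: keep only the k rows (neurons) with the largest update
# magnitude, a stand-in for the identified "safety neurons".
k = 8
row_norms = np.linalg.norm(dW_proj, axis=1)
mask = np.zeros(d, dtype=bool)
mask[np.argsort(row_norms)[-k:]] = True
dW_sparse = np.where(mask[:, None], dW_proj, 0.0)
```

The edited layer would then use `W0 + dW_sparse` in place of `W0`; zeroing rows does not break the null-space guarantee, so utility inputs still pass through unchanged by the edit.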
Source: arXiv:2602.22554