On the Robustness of Knowledge Editing for Detoxification
1️⃣ One-Sentence Summary
This paper finds that knowledge-editing-based detoxification methods for large language models have clear limitations: they are reliable only for specific models, a small number of target languages, and a limited set of editing objectives; beyond that, they can exhibit "pseudo-detoxification" or degraded effectiveness.
Knowledge-Editing-based (KE-based) detoxification has emerged as a promising approach for mitigating harmful behaviours in Large Language Models. Existing evaluations, however, largely rely on automatic toxicity classifiers, implicitly assuming that reduced toxicity scores reflect genuine behavioural suppression. In this work, we propose a robustness-oriented evaluation framework for KE-based detoxification that examines its reliability beyond standard classifier-based metrics along three dimensions: optimisation robustness, compositional robustness, and cross-lingual robustness. We identify pseudo-detoxification as a common failure mode, where apparent toxicity reductions arise from degenerate generation behaviours rather than meaningful suppression of unsafe content. We further show that detoxification effectiveness degrades when multiple unsafe behaviours are edited jointly, and that both monolingual and cross-lingual detoxification remain effective only under specific model-method combinations. Overall, our results indicate that KE-based detoxification is robust only for certain models, limited numbers of detoxification objectives, and a subset of languages.
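To make the pseudo-detoxification failure mode concrete, here is a minimal sketch (not the paper's actual evaluation code) of how one might flag it: an output is suspect when its classifier toxicity score drops sharply but the generation itself is degenerate, i.e. empty, very short, or dominated by repeated n-grams. The `toxicity_score` inputs are assumed to come from an external classifier, and all thresholds are illustrative assumptions.

```python
from collections import Counter

def repetition_ratio(text: str, n: int = 3) -> float:
    """Fraction of n-grams that are repeats; high values indicate looping output."""
    tokens = text.split()
    if len(tokens) < n:
        return 0.0
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = Counter(ngrams)
    repeats = sum(c - 1 for c in counts.values())
    return repeats / len(ngrams)

def is_pseudo_detoxified(
    pre_toxicity: float,      # classifier score before editing (assumed precomputed)
    post_toxicity: float,     # classifier score after editing
    post_text: str,           # text generated by the edited model
    drop_threshold: float = 0.3,
    repeat_threshold: float = 0.5,
    min_length: int = 5,
) -> bool:
    """Flag cases where toxicity 'improves' only because generation degenerated.

    A reduction counts as pseudo-detoxification when the classifier score
    drops substantially but the edited model's output is empty, too short,
    or dominated by repeated n-grams rather than genuinely safer content.
    """
    score_dropped = (pre_toxicity - post_toxicity) >= drop_threshold
    degenerate = (
        len(post_text.split()) < min_length
        or repetition_ratio(post_text) >= repeat_threshold
    )
    return score_dropped and degenerate

# Example: an edited model that just loops a phrase gets flagged.
print(is_pseudo_detoxified(0.9, 0.1, "I can not I can not I can not I can not"))
```

A check like this complements, rather than replaces, the classifier score: the score alone would report the looping example above as a successful edit.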
Source: arXiv:2602.10504