arXiv submission date: 2026-04-16
📄 Abstract - CausalDetox: Causal Head Selection and Intervention for Language Model Detoxification

Large language models (LLMs) frequently generate toxic content, posing significant risks for safe deployment. Current mitigation strategies often degrade generation quality or require costly human annotation. We propose CAUSALDETOX, a framework that identifies and intervenes on the specific attention heads causally responsible for toxic generation. Using the Probability of Necessity and Sufficiency (PNS), we isolate a minimal set of heads that are necessary and sufficient for toxicity. We utilize these components via two complementary strategies: (1) Local Inference-Time Intervention, which constructs dynamic, input-specific steering vectors for context-aware detoxification, and (2) PNS-Guided Fine-Tuning, which permanently unlearns toxic representations. We also introduce PARATOX, a novel benchmark of aligned toxic/non-toxic sentence pairs enabling controlled counterfactual evaluation. Experiments on ToxiGen, ImplicitHate, and ParaDetox show that CAUSALDETOX achieves up to 5.34% greater toxicity reduction compared to baselines while preserving linguistic fluency, and offers a 7x speedup in head selection.
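The head-selection and intervention ideas in the abstract can be sketched roughly as follows. The PNS lower bound used here is Pearl's standard bound from interventional probabilities; the function names, the on/off ablation probabilities, and the subtractive steering rule are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import numpy as np

def pns_lower_bound(p_toxic_on: float, p_toxic_off: float) -> float:
    """Pearl's lower bound on the Probability of Necessity and Sufficiency:
    PNS >= max(0, P(toxic | do(head active)) - P(toxic | do(head ablated))).
    The two probabilities would be estimated empirically, e.g. by ablating
    the head and measuring toxicity rates (assumed setup)."""
    return max(0.0, p_toxic_on - p_toxic_off)

def select_heads(scores: dict, k: int) -> list:
    """Keep the k attention heads with the highest PNS lower bounds,
    i.e. the minimal set most necessary and sufficient for toxicity."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

def steer(head_output: np.ndarray, steering_vec: np.ndarray,
          alpha: float = 1.0) -> np.ndarray:
    """Inference-time intervention (hypothetical form): shift a selected
    head's output away from a toxic direction by a scaled steering vector."""
    return head_output - alpha * steering_vec

# Toy example: score three hypothetical heads and steer the top one.
scores = {"L3.H7": pns_lower_bound(0.8, 0.3),   # strongly causal head
          "L1.H2": pns_lower_bound(0.2, 0.6),   # clamps to 0.0
          "L5.H0": pns_lower_bound(0.4, 0.1)}
top = select_heads(scores, k=2)
steered = steer(np.array([1.0, 1.0]), np.array([0.5, 0.0]), alpha=2.0)
```

In the paper's local variant the steering vector is constructed per input, so `steering_vec` would depend on the current context rather than being a fixed direction.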

Top-level tags: llm, model training, model evaluation
Detailed tags: toxicity mitigation, causal intervention, attention heads, inference-time intervention, benchmark

CausalDetox: Causal Head Selection and Intervention for Language Model Detoxification


1️⃣ One-sentence summary

This paper proposes CausalDetox, a framework that uses causal analysis to precisely locate and intervene on the attention heads in large language models responsible for generating harmful content, effectively reducing model toxicity while preserving the fluency of generated text and substantially improving processing efficiency.

Source: arXiv:2604.14602