Detoxifying LLMs via Representation Erasure-Based Preference Optimization
1️⃣ One-sentence summary
This paper proposes a new method called REPO that removes harmful information directly at the level of the model's internal representations, rather than merely suppressing harmful outputs, thereby addressing toxic generation in large language models in a more fundamental and robust way.
Large language models (LLMs) trained on web-scale data can produce toxic outputs, raising concerns for safe deployment. Prior defenses, based on applications of DPO, NPO, and similar algorithms, reduce the likelihood of harmful continuations, but not robustly so: they are vulnerable to adversarial prompting and easily undone by fine-tuning-based relearning attacks. Indeed, research has shown that these edits to the model are superficial: linear probing reveals that harmful "directions" remain present in representations. To address this, we propose Representation Erasure-based Preference Optimization (REPO), reformulating detoxification as a token-level preference problem. Using a novel objective with preference data, we force the representations of toxic continuations to converge toward their benign counterparts. Our mechanistic analysis reveals that this granular approach is critical: unlike baselines, REPO induces deep, localized edits to toxicity-encoding neurons while preserving general model utility. Exhaustive evaluations show that REPO achieves state-of-the-art robustness, stopping sophisticated threats, including relearning attacks and enhanced GCG jailbreaks, where existing representation- and output-based methods fail.
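The core idea, pulling the internal representations of toxic continuations toward their benign counterparts, can be illustrated with a minimal sketch. The abstract does not give REPO's exact objective, so the function name and the plain mean-squared-error alignment term below are illustrative assumptions, not the paper's actual loss:

```python
import numpy as np

def representation_alignment_loss(h_toxic: np.ndarray, h_benign: np.ndarray) -> float:
    """Hypothetical alignment term: penalize the distance between hidden
    states of toxic tokens and their benign counterparts, so toxic
    representations are pulled toward benign ones during training.
    Shapes: (seq_len, hidden_dim)."""
    return float(np.mean((h_toxic - h_benign) ** 2))

# Toy hidden states standing in for per-token transformer activations.
rng = np.random.default_rng(0)
h_benign = rng.normal(size=(4, 8))
h_toxic = h_benign + 0.5  # shifted along a hypothetical "toxic direction"

loss = representation_alignment_loss(h_toxic, h_benign)
```

In the full method this term would be combined with a token-level preference objective over paired toxic/benign continuations; the sketch only shows the erasure direction of the gradient signal.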
Source: arXiv:2602.23391