Identifying Intervenable and Interpretable Features via Orthogonality Regularization
1️⃣ One-sentence summary
This paper proposes a method based on orthogonality regularization: while fine-tuning a language model, the features are pushed to become almost orthogonal, which reduces interference between features and improves their interpretability and intervenability, while leaving model performance essentially unchanged.
Building on recent progress in fine-tuning language models around a fixed sparse autoencoder, we disentangle the decoder matrix into almost orthogonal features. This reduces interference and superposition between the features, while keeping performance on the target dataset essentially unchanged. Our orthogonality penalty leads to identifiable features, ensuring the uniqueness of the decomposition. Further, we find that the distance between embedded feature explanations increases with a stricter orthogonality penalty, a desirable property for interpretability. Invoking the $\textit{Independent Causal Mechanisms}$ principle, we argue that orthogonality promotes modular representations amenable to causal intervention. We empirically show that these increasingly orthogonalized features allow for isolated interventions. Our code is available under $\texttt{this https URL}$.
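To make the idea concrete, here is a minimal sketch of one common form of orthogonality penalty on a decoder matrix: normalize each feature direction and penalize the squared off-diagonal entries of the resulting Gram matrix. This is an illustrative assumption about the loss term, not the paper's exact formulation; the function name and the NumPy setup are ours.

```python
import numpy as np

def orthogonality_penalty(W: np.ndarray) -> float:
    """Soft orthogonality penalty on the columns (features) of W.

    Normalizes each feature direction to unit length, forms the Gram
    matrix G = W_hat^T W_hat, and sums the squared off-diagonal entries,
    i.e. ||G - I||_F^2. The penalty is 0 iff the features are exactly
    orthogonal; duplicated (fully interfering) features maximize it.

    Illustrative sketch only -- the paper's actual regularizer may differ.
    """
    W_hat = W / np.linalg.norm(W, axis=0, keepdims=True)  # unit-norm columns
    G = W_hat.T @ W_hat
    off_diag = G - np.eye(W.shape[1])
    return float(np.sum(off_diag ** 2))

# An orthonormal decoder incurs no penalty; identical features are punished.
print(orthogonality_penalty(np.eye(4)))        # exactly orthogonal -> 0.0
print(orthogonality_penalty(np.ones((4, 2))))  # two identical features -> 2.0
```

Adding a term like `lambda_ortho * orthogonality_penalty(decoder)` to the fine-tuning loss then trades off task performance against feature orthogonality, with stricter penalties (larger `lambda_ortho`, a hypothetical coefficient name) yielding more nearly orthogonal features.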
Source: arXiv: 2602.04718