DLM-Scope: Mechanistic Interpretability of Diffusion Language Models via Sparse Autoencoders
1️⃣ One-sentence summary
This paper proposes DLM-Scope, the first sparse-autoencoder-based interpretability framework for diffusion language models. It finds that the framework not only extracts interpretable features effectively, but can also improve performance when applied to early layers and enables more effective interventions, laying a foundation for understanding this emerging class of models.
Sparse autoencoders (SAEs) have become a standard tool for mechanistic interpretability in autoregressive large language models (LLMs), enabling researchers to extract sparse, human-interpretable features and intervene on model behavior. Recently, as diffusion language models (DLMs) have become an increasingly promising alternative to autoregressive LLMs, it is essential to develop tailored mechanistic interpretability tools for this emerging class of models. In this work, we present DLM-Scope, the first SAE-based interpretability framework for DLMs, and demonstrate that trained Top-K SAEs can faithfully extract interpretable features. Notably, we find that inserting SAEs affects DLMs differently than autoregressive LLMs: while SAE insertion in LLMs typically incurs a loss penalty, in DLMs it can reduce cross-entropy loss when applied to early layers, a phenomenon absent or markedly weaker in LLMs. Additionally, SAE features in DLMs enable more effective diffusion-time interventions, often outperforming LLM steering. Moreover, we pioneer new SAE-based research directions for DLMs: we show that SAEs can provide useful signals for DLM decoding order, and that SAE features remain stable during the post-training phase of DLMs. Our work establishes a foundation for mechanistic interpretability in DLMs and demonstrates the strong potential of applying SAEs to DLM-related tasks and algorithms.
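The abstract's central tool is the Top-K SAE: an autoencoder trained on residual-stream activations whose latent code is forced to be exactly k-sparse by keeping only the k largest pre-activations. Below is a minimal NumPy sketch of the standard Top-K SAE forward pass (the general formulation, not the paper's actual implementation); all weight shapes and names here are illustrative assumptions.

```python
import numpy as np

def topk_sae_forward(x, W_enc, b_enc, W_dec, b_dec, k):
    """One forward pass of a Top-K sparse autoencoder (standard formulation).

    x: (d_model,) activation vector from the model's residual stream;
    W_enc: (d_sae, d_model) encoder weights; W_dec: (d_model, d_sae)
    decoder weights. Only the k largest ReLU activations survive, so the
    latent code z is at most k-sparse.
    """
    pre = W_enc @ (x - b_dec) + b_enc      # encoder pre-activations
    z = np.maximum(pre, 0.0)               # ReLU nonlinearity
    if k < z.size:
        # Top-K sparsification: zero all entries outside the k largest.
        drop = np.argpartition(z, -k)[:-k]
        z[drop] = 0.0
    x_hat = W_dec @ z + b_dec              # sparse reconstruction of x
    return z, x_hat

# Tiny usage example with random weights (illustration only).
rng = np.random.default_rng(0)
d_model, d_sae, k = 8, 32, 4
W_enc = rng.normal(size=(d_sae, d_model)) / np.sqrt(d_model)
W_dec = rng.normal(size=(d_model, d_sae)) / np.sqrt(d_sae)
b_enc, b_dec = np.zeros(d_sae), np.zeros(d_model)
x = rng.normal(size=d_model)
z, x_hat = topk_sae_forward(x, W_enc, b_enc, W_dec, b_dec, k)
```

Training minimizes the reconstruction error between `x` and `x_hat`; the surviving latent coordinates of `z` are the candidate interpretable features, and interventions (such as the diffusion-time steering the abstract describes) edit `z` before decoding.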
Source: arXiv: 2602.05859