arXiv submission date: 2026-05-06
📄 Abstract - Feature Starvation as Geometric Instability in Sparse Autoencoders

Sparse autoencoders (SAEs) are used to disentangle the dense, polysemantic internal representations of large language models (LLMs) into interpretable, monosemantic concepts. However, standard $\ell_1$-regularized SAEs suffer from feature starvation (dead neurons) and shrinkage bias, often requiring computationally expensive heuristic resampling and nondifferentiable hard-masking methods to work around these failure modes. We argue that feature starvation is not merely an empirical artifact of poor data diversity, but a fundamental optimization-geometric pathology of overcomplete dictionaries: the $\ell_1$-induced sparse coding map is unstable and fundamentally misaligned with shallow, amortized encoders. To address this structural instability, we introduce adaptive elastic net SAEs (AEN-SAEs), a fully differentiable architecture grounded in classical sparse regression. AEN-SAEs combine an $\ell_2$ structural term that enforces strong convexity and Lipschitz stability with adaptive $\ell_1$ reweighting that eliminates shrinkage bias and suppresses spurious features, thereby jointly controlling the curvature and interaction structure of the induced polyhedral geometry. Theoretically, we show that AEN-SAEs yield a Lipschitz-continuous sparse coding map and recover the global feature support under mild assumptions. Empirically, across synthetic settings and LLMs (Pythia 70M, Llama 3.1 8B), AEN-SAEs mitigate feature starvation without auxiliary heuristics while maintaining competitive reconstruction fidelity.
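
To make the penalty structure concrete, the combined objective can be read as $\mathcal{L} = \|x - \hat{x}\|_2^2 + \lambda_1 \sum_j w_j |z_j| + \lambda_2 \|z\|_2^2$, where the adaptive weights $w_j$ shrink for consistently active features (reducing $\ell_1$ shrinkage bias) and grow for rarely used ones, following the classical adaptive lasso recipe $w_j = 1/(|\hat{z}_j|^\gamma + \epsilon)$. Below is a minimal PyTorch sketch of one way such a loss could be implemented; the class name `AENSAE`, the hyperparameters `l1`, `l2`, `gamma`, `eps`, and the batch-statistic weight estimator are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of an adaptive elastic net SAE objective.
# Names and the per-batch weight estimator are assumptions for illustration;
# the paper's exact formulation may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AENSAE(nn.Module):
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)   # shallow amortized encoder
        self.decoder = nn.Linear(d_dict, d_model)   # overcomplete dictionary

    def forward(self, x):
        z = F.relu(self.encoder(x))   # nonnegative sparse codes
        x_hat = self.decoder(z)       # reconstruction
        return x_hat, z

def aen_loss(x, x_hat, z, l1=1e-3, l2=1e-4, gamma=1.0, eps=1e-6):
    """Reconstruction + elastic net penalty with adaptive l1 reweighting.

    The l2 term on the codes adds strong convexity (stabilizing the sparse
    coding map); the per-feature l1 weights w_j = 1 / (|z_j|^gamma + eps)
    are computed from detached activations, so consistently active features
    are penalized less (reducing shrinkage) while rarely active features
    are suppressed.
    """
    recon = F.mse_loss(x_hat, x)
    with torch.no_grad():
        # Per-feature magnitude estimate from the current batch (assumption:
        # the paper may use a different, e.g. running, estimator).
        mag = z.abs().mean(dim=0)
        w = 1.0 / (mag.pow(gamma) + eps)
    l1_term = (w * z.abs()).sum(dim=-1).mean()
    l2_term = z.pow(2).sum(dim=-1).mean()
    return recon + l1 * l1_term + l2 * l2_term
```

Detaching the weight computation keeps the reweighting from feeding gradients back into the penalty targets, mirroring the two-stage structure of the classical adaptive lasso while remaining fully differentiable end to end.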

Top-level tags: understanding, theory
Detailed tags: sparse autoencoders, feature starvation, optimization geometry, l1 regularization, interpretability

Feature Starvation as Geometric Instability in Sparse Autoencoders


1️⃣ One-sentence summary

The paper shows that the root cause of "feature starvation" in standard sparse autoencoders is a geometric instability in the model's optimization landscape, and proposes a new method, the adaptive elastic net sparse autoencoder, that mitigates this persistent failure mode through a principled mathematical formulation, making AI models more stable and efficient when interpreting their own internal workings.

Source: arXiv:2605.05341