Concept Component Analysis: A Principled Approach for Concept Extraction in LLMs
1️⃣ One-Sentence Summary
This paper proposes a new method, Concept Component Analysis, which is grounded in a theoretical model and extracts human-understandable concepts by linearly decomposing the internal representations of large language models, addressing the lack of theoretical grounding in existing methods.
Developing human-understandable interpretations of large language models (LLMs) is becoming increasingly critical for their deployment in essential domains. Mechanistic interpretability seeks to address this by extracting human-interpretable processes and concepts from LLMs' activations. Sparse autoencoders (SAEs) have emerged as a popular approach for extracting interpretable and monosemantic concepts by decomposing LLM internal representations over a learned dictionary. Despite their empirical progress, SAEs suffer from a fundamental theoretical ambiguity: a well-defined correspondence between LLM representations and human-interpretable concepts remains unclear. This lack of theoretical grounding gives rise to several methodological challenges, including difficulties in principled method design and in establishing evaluation criteria. In this work, we show that, under mild assumptions, LLM representations can be approximated as a linear mixture of the log-posteriors over concepts given the input context, through the lens of a latent variable model in which concepts are treated as latent variables. This motivates a principled framework for concept extraction, namely Concept Component Analysis (ConCA), which aims to recover the log-posterior of each concept from LLM representations through an unsupervised linear unmixing process. We explore a specific variant, termed sparse ConCA, which leverages a sparsity prior to address the inherent ill-posedness of the unmixing problem. We implement 12 sparse ConCA variants and demonstrate their ability to extract meaningful concepts across multiple LLMs, offering theory-backed advantages over SAEs.
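The abstract's core computation is an unsupervised linear unmixing of LLM representations under a sparsity prior. Below is a minimal sketch of that generic recipe, using off-the-shelf sparse dictionary learning as a stand-in for the paper's 12 sparse ConCA variants; the synthetic data, matrix names (`X`, `S_hat`, `A_hat`), and all hyperparameters are illustrative assumptions, not the paper's actual objective or implementation.

```python
# Hypothetical sketch: unsupervised linear unmixing with a sparsity prior,
# in the spirit of sparse ConCA. Assumes activations X (contexts x dims)
# follow X ~= S @ A, where columns of S hold per-concept components
# (log-posteriors, in the paper's model) and A is the unknown mixing matrix.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Synthetic stand-in for LLM residual-stream activations.
n_contexts, d_model, n_concepts = 500, 64, 16
A_true = rng.normal(size=(n_concepts, d_model))                     # unknown mixing
S_true = rng.normal(size=(n_contexts, n_concepts))
S_true *= rng.random((n_contexts, n_concepts)) < 0.1                # sparse components
X = S_true @ A_true + 0.01 * rng.normal(size=(n_contexts, d_model))

# Sparse dictionary learning solves min ||X - S A||^2 + alpha * ||S||_1,
# one standard instantiation of sparsity-regularized linear unmixing.
unmixer = DictionaryLearning(
    n_components=n_concepts,
    alpha=0.5,                        # strength of the L1 (sparsity) prior
    transform_algorithm="lasso_lars",
    max_iter=200,
    random_state=0,
)
S_hat = unmixer.fit_transform(X)      # recovered per-context concept components
A_hat = unmixer.components_           # recovered mixing directions (n_concepts x d_model)

print("sparsity of recovered components:", np.mean(np.abs(S_hat) < 1e-8))
```

The sparsity prior is what makes the problem tractable: without it, any invertible transform applied jointly to `S` and `A` yields the same reconstruction of `X`, which is the ill-posedness the abstract refers to.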
Source: arXiv: 2601.20420