Latent Structure of Affective Representations in Large Language Models
1️⃣ One-Sentence Summary
Using geometric data analysis, this study finds that the affective representations learned by large language models have latent-space structure consistent with classical models of emotion from psychology (such as valence-arousal), and that this structure, while nonlinear, can be well-approximated linearly, providing an empirical basis for improving model interpretability and safety.
The geometric structure of latent representations in large language models (LLMs) is an active area of research, driven in part by its implications for model transparency and AI safety. Existing literature has focused mainly on general geometric and topological properties of the learned representations, but in the absence of ground-truth latent geometry, validating the findings of such approaches is challenging. Emotion processing provides an intriguing testbed for probing representational geometry, as emotions exhibit both categorical organization and continuous affective dimensions, both well established in the psychology literature. Moreover, understanding such representations carries safety relevance. In this work, we investigate the latent structure of affective representations in LLMs using geometric data analysis tools. We present three main findings. First, we show that LLMs learn coherent latent representations of emotions that align with widely used valence--arousal models from psychology. Second, we find that these representations exhibit nonlinear geometric structure that can nonetheless be well-approximated linearly, providing empirical support for the linear representation hypothesis commonly assumed in model transparency methods. Third, we demonstrate that the learned latent representation space can be leveraged to quantify uncertainty in emotion processing tasks. Our findings suggest that LLMs acquire affective representations with geometric structure paralleling established models of human emotion, with practical implications for model interpretability and safety.
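The second finding (nonlinear structure that a linear map still captures well) can be illustrated with a small toy sketch. This is not the paper's code: it uses synthetic data in which "emotions" lie on a circumplex-style circle in valence-arousal space, lifted nonlinearly into a higher-dimensional latent space, and then checks how well an ordinary least-squares linear probe recovers the two affective dimensions.

```python
import numpy as np

# Hypothetical illustration (not the paper's method): points on a circle in
# valence-arousal space, mimicking the circumplex model of emotion.
rng = np.random.default_rng(0)
n = 500
theta = rng.uniform(0, 2 * np.pi, n)
targets = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # (n, 2): valence, arousal

# Nonlinear lift into a 16-dimensional "latent" space via a fixed random
# projection followed by tanh, plus a little noise.
W = 0.5 * rng.normal(size=(2, 16))
latent = np.tanh(targets @ W) + 0.05 * rng.normal(size=(n, 16))

# Linear probe: least-squares map from latent space back to (valence, arousal).
coef, *_ = np.linalg.lstsq(latent, targets, rcond=None)
pred = latent @ coef

# Coefficient of determination per affective dimension; values near 1 mean
# the nonlinear embedding is well-approximated by a linear read-out.
ss_res = ((targets - pred) ** 2).sum(axis=0)
ss_tot = ((targets - targets.mean(axis=0)) ** 2).sum(axis=0)
r2 = 1 - ss_res / ss_tot
print(r2)
```

On this synthetic data the probe attains high R^2 for both dimensions, echoing (in a much simpler setting) the paper's empirical support for the linear representation hypothesis.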
Source: arXiv: 2604.07382