Symmetry in language statistics shapes the geometry of model representations
1️⃣ One-sentence summary
This paper finds that the translation symmetry of word co-occurrence probabilities in language (for example, the probability that two months co-occur depends only on the time interval between them) is the root cause of the simple geometric structures that appear in the internal representations of large language models (such as the months arranging into a circle), and that these structures remain stable even when the data are perturbed.
Although learned representations underlie neural networks' success, their fundamental properties remain poorly understood. A striking example is the emergence of simple geometric structures in LLM representations: for example, calendar months organize into a circle, years form a smooth one-dimensional manifold, and cities' latitudes and longitudes can be decoded by a linear probe. We show that the statistics of language exhibit a translation symmetry -- e.g., the co-occurrence probability of two months depends only on the time interval between them -- and we prove that the latter governs the aforementioned geometric structures in high-dimensional word embedding models. Moreover, we find that these structures persist even when the co-occurrence statistics are strongly perturbed (for example, by removing all sentences in which two months appear together) and at moderate embedding dimension. We show that this robustness naturally emerges if the co-occurrence statistics are collectively controlled by an underlying continuous latent variable. We empirically validate this theoretical framework in word embedding models, text embedding models, and large language models.
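To make the mechanism concrete, here is a minimal, hypothetical sketch (not code from the paper): a co-occurrence matrix over 12 "months" whose entries depend only on the circular interval between months is circulant, so a word-embedding-style factorization of it has Fourier modes as eigenvectors, and the leading two embedding coordinates place the months on a circle. The decaying kernel exp(-d/3), the double-centering step, and the eigendecomposition route are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

n = 12  # number of "months"
idx = np.arange(n)

# Circular distance between months i and j: the co-occurrence statistic below
# depends only on this interval, which is the translation symmetry in question.
d = np.abs(idx[:, None] - idx[None, :])
d = np.minimum(d, n - d)

# Hypothetical co-occurrence statistic (e.g., a PMI-like quantity) decaying
# with the interval; any function of d alone yields a circulant matrix.
M = np.exp(-d / 3.0)

# Word-embedding-style factorization: double-center, then eigendecompose.
# The eigenvectors of a circulant matrix are Fourier modes, so the top
# non-constant pair of coordinates arranges the 12 months on a circle.
Mc = M - M.mean(axis=0, keepdims=True) - M.mean(axis=1, keepdims=True) + M.mean()
vals, vecs = np.linalg.eigh(Mc)
order = np.argsort(vals)[::-1]                      # eigenvalues, descending
E = vecs[:, order[:2]] * np.sqrt(vals[order[:2]])   # 2-D "embedding" of months

radii = np.linalg.norm(E, axis=1)
print("per-month radius (near-constant => circle):", np.round(radii, 3))
```

Running this prints a nearly constant radius for all 12 points, i.e., the toy embedding is a circle; breaking the translation symmetry of `M` (making entries depend on i and j separately) destroys this structure, which is the contrast the paper's perturbation experiments probe.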
Source: arXiv: 2602.15029