Convergent Evolution: How Different Language Models Learn Similar Number Representations
1️⃣ One-Sentence Summary
This work finds that although Transformers, linear RNNs, LSTMs, and other types of language models are trained in different ways, they all learn to represent numbers with periodic patterns of period 2, 5, and 10; however, only some models go on to form geometrically separable features usable for number classification, revealing both convergence and divergence in how models learn numeric representations.
Language models trained on natural text learn to represent numbers using periodic features with dominant periods at $T=2, 5, 10$. In this paper, we identify a two-tiered hierarchy of these features: while Transformers, Linear RNNs, LSTMs, and classical word embeddings trained in different ways all learn features that have period-$T$ spikes in the Fourier domain, only some learn geometrically separable features that can be used to linearly classify a number mod-$T$. To explain this incongruity, we prove that Fourier domain sparsity is necessary but not sufficient for mod-$T$ geometric separability. Empirically, we investigate when model training yields geometrically separable features, finding that the data, architecture, optimizer, and tokenizer all play key roles. In particular, we identify two different routes through which models can acquire geometrically separable features: they can learn them from complementary co-occurrence signals in general language data, including text-number co-occurrence and cross-number interaction, or from multi-token (but not single-token) addition problems. Overall, our results highlight the phenomenon of convergent evolution in feature learning: A diverse range of models learn similar features from different training signals.
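To make the two-tiered distinction concrete, below is a minimal sketch (not code from the paper) of the two diagnostics the abstract implies: checking for period-$T$ spikes in the Fourier domain, and checking whether a linear probe can classify a number mod $T$. The matrix `E` of number embeddings is a synthetic stand-in with a planted period-10 signal; `num_numbers`, `d`, and all variable names are illustrative assumptions.

```python
# Minimal sketch of the two diagnostics, assuming `E` is a (num_numbers, d)
# matrix of embeddings for the integers 0..num_numbers-1 from some model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
num_numbers, d = 100, 64

# Synthetic stand-in embeddings: a clean period-10 circle plus noise dims.
n = np.arange(num_numbers)
E = np.stack([np.cos(2 * np.pi * n / 10),
              np.sin(2 * np.pi * n / 10)], axis=1)
E = np.concatenate([E, 0.1 * rng.standard_normal((num_numbers, d - 2))], axis=1)

# Test 1: Fourier-domain sparsity. FFT each embedding dimension along the
# number axis; a spike at frequency k corresponds to period num_numbers/k,
# so period-10 structure shows up at k = num_numbers/10.
power = np.abs(np.fft.rfft(E - E.mean(0), axis=0)) ** 2  # (freq, dim)
spectrum = power.sum(axis=1)                             # total power per frequency
top_freqs = np.argsort(spectrum[1:])[::-1][:3] + 1       # skip the DC component
print("dominant periods:", num_numbers / top_freqs)      # expect ~10 to dominate

# Test 2: geometric separability. A linear probe classifying n mod 10 from
# the raw embeddings; high held-out accuracy means mod-10 structure is
# linearly decodable, which Fourier spikes alone do not guarantee.
y = n % 10
X_tr, X_te, y_tr, y_te = train_test_split(E, y, test_size=0.3, random_state=0)
probe = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
print("mod-10 probe accuracy:", probe.score(X_te, y_te))
```

With real embeddings in place of the synthetic `E`, test 1 can succeed while test 2 fails: this is exactly the dissociation the paper formalizes by proving Fourier-domain sparsity necessary but not sufficient for mod-$T$ geometric separability.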
Source: arXiv:2604.20817