Emergent Analogical Reasoning in Transformers
1️⃣ One-sentence summary
This work reveals how Transformer models perform human-like analogical reasoning across domains via geometric alignment and an internal mapping mechanism, and finds that this ability is highly sensitive to data, training choices, and model scale.
Analogy is a central faculty of human intelligence, enabling abstract patterns discovered in one domain to be applied to another. Despite its central role in cognition, the mechanisms by which Transformers acquire and implement analogical reasoning remain poorly understood. In this work, inspired by the notion of functors in category theory, we formalize analogical reasoning as the inference of correspondences between entities across categories. Based on this formulation, we introduce synthetic tasks that evaluate the emergence of analogical reasoning under controlled settings. We find that the emergence of analogical reasoning is highly sensitive to data characteristics, optimization choices, and model scale. Through mechanistic analysis, we show that analogical reasoning in Transformers decomposes into two key components: (1) geometric alignment of relational structure in the embedding space, and (2) the application of a functor within the Transformer. These mechanisms enable models to transfer relational structure from one category to another, realizing analogy. Finally, we quantify these effects and find that the same trends are observed in pretrained LLMs. In doing so, we move analogy from an abstract cognitive notion to a concrete, mechanistically grounded phenomenon in modern neural networks.
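The abstract's first mechanism, geometric alignment of relational structure in the embedding space, echoes the classic "parallelogram" view of analogy, in which a shared relational offset lets a : b :: a' : ? be solved by vector arithmetic. The sketch below is a toy illustration of that idea only; the words, embeddings, and helper functions are invented for this example and are not taken from the paper.

```python
# Toy illustration (not the paper's setup): analogy as geometric
# alignment of relational offsets in an embedding space. If the
# "capital-of" offset is shared across a category, the analogy
# a : b :: a' : ? is solved by finding the word nearest a' + (b - a).

def sub(u, v): return [x - y for x, y in zip(u, v)]
def add(u, v): return [x + y for x, y in zip(u, v)]
def dist(u, v): return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

# Hand-crafted 2-D embeddings: the "capital-of" relation is encoded
# as an (approximately) parallel +1 offset on the second axis.
emb = {
    "france": [0.0, 0.0], "paris": [0.0, 1.0],
    "japan":  [2.0, 0.1], "tokyo": [2.0, 1.1],
}

def solve_analogy(a, b, a2, vocab):
    """Return the word in vocab closest to emb[a2] + (emb[b] - emb[a])."""
    target = add(emb[a2], sub(emb[b], emb[a]))
    candidates = [w for w in vocab if w not in (a, b, a2)]
    return min(candidates, key=lambda w: dist(emb[w], target))

print(solve_analogy("france", "paris", "japan", emb))  # -> tokyo
```

Here the relational offset transfers exactly between the two entity pairs; the paper's point is that Transformers must learn such an alignment (plus a functor-like internal map) rather than having it built in.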
Source: arXiv:2602.01992