arXiv submission date: 2026-04-23
📄 Abstract - To See the Unseen: on the Generalization Ability of Transformers in Symbolic Reasoning

We investigate the ability of decoder-only transformer models to perform abstract symbolic reasoning; specifically solving propositional logic reasoning problems given in-context. Previous work demonstrated that models fail to generalize to problems involving variable names that were not observed during training, and it was shown that one reason behind this is the difficulty of copying (or generating) unseen tokens. We show both theoretically and empirically that a particular representational collapse also has a crucial role: the unembeddings (last-layer weights) of unseen tokens collapse to nearly the same vector during training. The collapse makes distinguishing multiple unseen variables difficult for the model (especially when the embedding and unembedding parameters are shared), and provides a mechanistic explanation for the effectiveness of existing heuristic interventions like "active forgetting", which periodically reset the token (un)embeddings. Based on these observations, we devise a combination of techniques, involving a small architecture change facilitating copying, data diversity, and freezing or resetting (un)embeddings, that achieves generalization to unseen tokens. We support our claims with extensive controlled experiments on propositional logic reasoning problems. Beyond synthetic experiments, we also observe evidence of (un)embedding collapse in the open-weight models in the Gemma 3 family, which includes 99 unused tokens reserved for downstream use. Empirically we find that the correlated embeddings of these tokens are a poor initialization for finetuning applications.
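The collapse described in the abstract can be diagnosed directly from the unembedding matrix: if the rows for unseen tokens have collapsed toward a common direction, their pairwise cosine similarities approach 1, while rows of well-separated tokens stay near 0. A minimal sketch, using random vectors as stand-ins for a trained unembedding table (the collapse is simulated here by adding small noise around a shared mean; the dimensions and the group of 99 tokens mirror, but are not taken from, the paper's Gemma 3 observation):

```python
import numpy as np

def mean_pairwise_cosine(vectors: np.ndarray) -> float:
    """Average cosine similarity over all distinct pairs of rows."""
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = normed @ normed.T
    n = len(vectors)
    # Exclude the diagonal (each row's self-similarity of 1).
    return (sims.sum() - n) / (n * (n - 1))

rng = np.random.default_rng(0)
d = 256  # hypothetical model dimension

# "Seen" tokens: independent random rows, so similarity stays near zero.
seen = rng.normal(size=(99, d))

# "Collapsed" unseen tokens: one shared vector plus small noise,
# so every pair is nearly parallel and similarity is close to one.
shared = rng.normal(size=d)
unseen = shared + 0.05 * rng.normal(size=(99, d))

print(f"seen tokens:   {mean_pairwise_cosine(seen):.3f}")
print(f"unseen tokens: {mean_pairwise_cosine(unseen):.3f}")
```

Running the same statistic on the rows of a real model's output projection (restricted to its reserved, never-trained token IDs) is one way to check for the correlated embeddings the abstract reports.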

Top-level tags: llm machine learning theory
Detailed tags: transformers symbolic reasoning generalization representational collapse unembedding

To See the Unseen: on the Generalization Ability of Transformers in Symbolic Reasoning


1️⃣ One-sentence summary

This paper shows that a key reason decoder-only Transformer models fail to generalize on in-context propositional logic problems with unseen variable names is "representational collapse" in the last-layer weights (the unembedding layer): the unembeddings of unseen variables are mapped to nearly identical vectors, making it hard for the model to distinguish new variables from one another. Building on this finding, the authors combine a small architecture change, data diversity, and freezing or resetting the (un)embeddings to achieve generalization to unseen symbols.
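The "resetting" intervention mentioned above ("active forgetting" in the abstract) amounts to periodically re-initializing the (un)embedding table during training while the rest of the network keeps its weights. A hedged sketch of that loop logic (the reset period, scales, and the noise stand-in for a gradient step are all illustrative, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def fresh_embeddings(vocab: int, d: int) -> np.ndarray:
    """Re-initialize the (un)embedding table from scratch."""
    return rng.normal(scale=0.02, size=(vocab, d))

vocab, d = 1000, 64
table = fresh_embeddings(vocab, d)
reset_every = 100  # hypothetical reset period (a tunable hyperparameter)

for step in range(1, 301):
    # Stand-in for a real gradient step: the table drifts a little.
    table += 0.001 * rng.normal(size=table.shape)
    if step % reset_every == 0:
        # Active forgetting: discard the learned (un)embeddings;
        # the rest of the network (not modeled here) is untouched.
        table = fresh_embeddings(vocab, d)

# The last reset happened at step 300, so the table's scale is back
# near the fresh-initialization scale of 0.02.
print(f"final std: {table.std():.4f}")
```

The point of the sketch is only the control flow: the embedding parameters are thrown away on a schedule, which prevents them from settling into the collapsed configuration described above.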

Source: arXiv: 2604.21632