Physics of Language Models: Part 4.1, Architecture Design and the Magic of Canon Layers
1️⃣ One-Sentence Summary
This paper introduces "Canon layers", a new type of lightweight neural-network component that strengthens information flow between neighboring tokens in a language model. Canon layers significantly improve core capabilities such as reasoning depth and knowledge manipulation, and can even lift weaker model architectures to the performance level of state-of-the-art models.
Understanding architectural differences in language models is challenging, especially at academic-scale pretraining (e.g., 1.3B parameters, 100B tokens), where results are often dominated by noise and randomness. To overcome this, we introduce controlled synthetic pretraining tasks that isolate and evaluate core model capabilities. Within this framework, we discover CANON LAYERS: lightweight architectural components -- named after the musical term "canon" -- that promote horizontal information flow across neighboring tokens. Canon layers compute weighted sums of nearby token representations and integrate seamlessly into Transformers, linear attention, state-space models, or any sequence architecture. We present 12 key results, including how Canon layers enhance reasoning depth (e.g., by $2\times$), reasoning breadth, and knowledge manipulation. They lift weak architectures like NoPE to match RoPE, and linear attention to rival SOTA linear models like Mamba2/GDN -- validated both through synthetic tasks and real-world academic-scale pretraining. This synthetic playground offers an economical, principled path to isolate core model capabilities often obscured at academic scales. Equipped with infinite high-quality data, it may even PREDICT how future architectures will behave as training pipelines improve -- e.g., through better data curation or RL-based post-training -- unlocking deeper reasoning and hierarchical inference.
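To make the "weighted sums of nearby token representations" concrete, here is a minimal, hypothetical sketch of a Canon-style layer in PyTorch. It is not the authors' implementation: the choice of a causal depthwise 1-D convolution, the kernel size of 4, and the residual connection are illustrative assumptions; only the idea of mixing each token with a few preceding neighbors comes from the abstract.

```python
import torch
import torch.nn as nn


class CanonLayerSketch(nn.Module):
    """Illustrative sketch of a Canon-style layer (assumption, not the paper's code).

    Models the "weighted sum of nearby token representations" as a causal
    depthwise 1-D convolution over the previous few tokens, added back to the
    input as a residual, so each token also keeps its own representation.
    """

    def __init__(self, hidden_dim: int, kernel_size: int = 4):
        super().__init__()
        # Depthwise conv: each channel mixes only its own values across
        # neighboring positions (horizontal information flow).
        self.conv = nn.Conv1d(
            hidden_dim, hidden_dim,
            kernel_size=kernel_size,
            groups=hidden_dim,
            padding=kernel_size - 1,  # left-pad; right padding is trimmed below
            bias=False,
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim)
        seq_len = x.size(1)
        # Conv1d expects (batch, channels, length); trim to seq_len so the
        # layer stays causal (no information from future tokens).
        y = self.conv(x.transpose(1, 2))[..., :seq_len]
        return x + y.transpose(1, 2)


if __name__ == "__main__":
    layer = CanonLayerSketch(hidden_dim=64, kernel_size=4)
    h = torch.randn(2, 16, 64)   # (batch, tokens, hidden)
    print(layer(h).shape)        # torch.Size([2, 16, 64])
```

Because the operation only touches a fixed window of neighboring positions per channel, a layer like this adds very few parameters and can be dropped into a Transformer block, a linear-attention block, or a state-space block without changing the rest of the architecture, which is the spirit of the "plug into any sequence architecture" claim above.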
Source: arXiv: 2512.17351