arXiv submission date: 2026-03-16
📄 Abstract - Learning to Recall with Transformers Beyond Orthogonal Embeddings

Modern large language models (LLMs) excel at tasks that require storing and retrieving knowledge, such as factual recall and question answering. Transformers are central to this capability because they can encode information during training and retrieve it at inference. Existing theoretical analyses typically study transformers under idealized assumptions such as infinite data or orthogonal embeddings. In realistic settings, however, models are trained on finite datasets with non-orthogonal (random) embeddings. We address this gap by analyzing a single-layer transformer with random embeddings trained with (empirical) gradient descent on a simple token-retrieval task, where the model must identify an informative token within a length-$L$ sequence and learn a one-to-one mapping from tokens to labels. Our analysis tracks the "early phase" of gradient descent and yields explicit formulas for the model's storage capacity, revealing a multiplicative dependence between sample size $N$, embedding dimension $d$, and sequence length $L$. We validate these scalings numerically and further complement them with a lower bound for the underlying statistical problem, demonstrating that this multiplicative scaling is intrinsic under non-orthogonal embeddings.
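The token-retrieval task the abstract describes can be made concrete with a small data-generation sketch. The sizes below (`V`, `D`, `d`, `L`, `N`) are illustrative placeholders, not the paper's experimental values, and the assumption that distractor tokens come from a vocabulary disjoint from the informative tokens is one simple way to make the informative token identifiable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration (not the paper's experimental values)
V = 32        # informative-token vocabulary; each token maps one-to-one to a label
D = 32        # distractor vocabulary, assumed disjoint from the informative tokens
d, L, N = 16, 8, 100   # embedding dimension, sequence length, sample size

# Random (non-orthogonal) Gaussian embeddings for all V + D tokens,
# matching the paper's non-orthogonal-embedding setting
E = rng.standard_normal((V + D, d)) / np.sqrt(d)

# One-to-one mapping from informative tokens to labels (a random permutation)
label_of = rng.permutation(V)

def sample_batch(n):
    """Each sequence holds L-1 distractors plus one informative token at a
    random position; the label depends only on that informative token."""
    X = rng.integers(V, V + D, size=(n, L))   # fill with distractor ids
    pos = rng.integers(0, L, size=n)          # position of the informative token
    info = rng.integers(0, V, size=n)         # informative token id
    X[np.arange(n), pos] = info
    return X, label_of[info]

X, y = sample_batch(N)
```

A single-layer transformer trained on `(X, y)` pairs must use attention to locate the informative token among the distractors and then map its (non-orthogonal) embedding to the correct label, which is exactly the storage-and-retrieval behavior whose capacity the paper analyzes.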

Top-level tags: llm theory model training
Detailed tags: transformers memory retrieval gradient descent theoretical analysis capacity scaling

Learning to Recall with Transformers Beyond Orthogonal Embeddings


1️⃣ One-Sentence Summary

By analyzing a simple single-layer transformer trained with non-orthogonal random embeddings, this paper shows that the model's recall capacity (its ability to store and retrieve information) is governed by a multiplicative relationship between sample size, embedding dimension, and sequence length, and proves that this scaling is intrinsic to such models in realistic finite-data settings.

Source: arXiv:2603.15923