Interleaved Head Attention
1️⃣ One-Sentence Summary
This paper proposes a new method called "Interleaved Head Attention," which lets attention heads communicate with one another during the attention computation. This addresses a limitation of traditional multi-head attention, whose heads cannot share information with each other, which hurts multi-step reasoning, and it improves the performance of large language models on tasks such as mathematical problem solving and complex information retrieval.
Multi-Head Attention (MHA) is the core computational primitive underlying modern Large Language Models (LLMs). However, MHA suffers from a fundamental linear scaling limitation: $H$ attention heads produce exactly $H$ independent attention matrices, with no communication between heads during attention computation. This becomes problematic for multi-step reasoning, where correct answers depend on aggregating evidence from multiple parts of the context and composing latent token-to-token relations over a chain of intermediate inferences. To address this, we propose Interleaved Head Attention (IHA), which enables cross-head mixing by constructing $P$ pseudo-heads per head (typically $P=H$), where each pseudo query/key/value is a learned linear combination of all $H$ original queries, keys and values respectively. Interactions between pseudo-query and pseudo-key heads induce up to $P^2$ attention patterns per head with modest parameter overhead $\mathcal{O}(H^2P)$. We provide theory showing improved efficiency in terms of number of parameters on the synthetic Polynomial task (IHA uses $\Theta(\sqrt{k}n^2)$ parameters vs. $\Theta(kn^2)$ for MHA) and on the synthetic order-sensitive CPM-3 task (IHA uses $\lceil\sqrt{N_{\max}}\rceil$ heads vs. $N_{\max}$ for MHA). On real-world benchmarks, IHA improves Multi-Key retrieval on RULER by 10-20% (4k-16k) and, after fine-tuning for reasoning on OpenThoughts, improves GSM8K by 5.8% and MATH-500 by 2.8% (Majority Vote) over full attention.
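The cross-head mixing described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it assumes random stand-ins for the per-head Q/K/V tensors, uses hypothetical mixing-weight names (`Wq`, `Wk`, `Wv`, each of size $P \times H$, matching the $\mathcal{O}(H^2 P)$ parameter overhead when $P = H$), and makes the simplifying choice of pairing each pseudo-key head with the pseudo-value head of the same index.

```python
import numpy as np

rng = np.random.default_rng(0)

H, P, T, d = 4, 4, 8, 16  # heads, pseudo-heads per head, seq length, head dim

# Stand-ins for the H original per-head query/key/value tensors of MHA.
Q = rng.standard_normal((H, T, d))
K = rng.standard_normal((H, T, d))
V = rng.standard_normal((H, T, d))

# Hypothetical learned mixing weights: each pseudo query/key/value head is a
# linear combination of all H original heads (P*H scalars per projection).
Wq = rng.standard_normal((P, H))
Wk = rng.standard_normal((P, H))
Wv = rng.standard_normal((P, H))

Qp = np.einsum("ph,htd->ptd", Wq, Q)  # P pseudo-queries
Kp = np.einsum("ph,htd->ptd", Wk, K)  # P pseudo-keys
Vp = np.einsum("ph,htd->ptd", Wv, V)  # P pseudo-values

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Every pseudo-query head interacts with every pseudo-key head, inducing
# up to P^2 distinct attention patterns.
scores = np.einsum("ptd,qsd->pqts", Qp, Kp) / np.sqrt(d)  # (P, P, T, T)
attn = softmax(scores, axis=-1)
out = np.einsum("pqts,qsd->pqtd", attn, Vp)               # (P, P, T, d)

print(scores.shape, out.shape)  # (4, 4, 8, 8) (4, 4, 8, 16)
```

How the $P^2$ output tensors are aggregated back into a single per-head output (e.g. a further learned combination) is a design choice of the full method that this sketch leaves out.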
Source: arXiv: 2602.21371