📄 Abstract - The Strong Lottery Ticket Hypothesis for Multi-Head Attention Mechanisms

The strong lottery ticket hypothesis (SLTH) conjectures that high-performing subnetworks, called strong lottery tickets (SLTs), are hidden in randomly initialized neural networks. Although recent theoretical studies have established the SLTH across various neural architectures, the SLTH for transformer architectures still lacks theoretical understanding. In particular, the current theory of the SLTH does not yet account for the multi-head attention (MHA) mechanism, a core component of transformers. To address this gap, we introduce a theoretical analysis of the existence of SLTs within MHAs. We prove that, if a randomly initialized MHA with $H$ heads and input dimension $d$ has hidden dimension $O(d\log(Hd^{3/2}))$ for the key and value, it contains, with high probability, an SLT that approximates an arbitrary MHA with the same input dimension. Furthermore, by leveraging this theory for MHAs, we extend the SLTH to transformers without normalization layers. We empirically validate our theoretical findings, demonstrating that the approximation error between the SLT within a source model (MHA and transformer) and its target counterpart decreases exponentially as the hidden dimension of the source model increases.
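SLTH existence proofs of this kind typically reduce approximating a single target weight to a subset-sum problem: with enough random weights available, some subset of them sums close to the target, and the achievable error shrinks rapidly with the number of candidates. A minimal, hypothetical sketch of that core reduction (not the paper's actual construction) in plain Python:

```python
import itertools
import random

def best_subset_sum_error(target, samples):
    """Brute-force the subset of `samples` whose sum is closest to `target`.

    The empty subset (error = |target|) is always a candidate, so the
    error can only decrease as more random samples become available.
    """
    best = abs(target)
    for r in range(1, len(samples) + 1):
        for combo in itertools.combinations(samples, r):
            best = min(best, abs(target - sum(combo)))
    return best

random.seed(0)
target = 0.37  # an arbitrary target weight in [-1, 1]
for n in (4, 8, 12):
    samples = [random.uniform(-1, 1) for _ in range(n)]
    err = best_subset_sum_error(target, samples)
    print(f"n={n:2d}  best approximation error = {err:.6f}")
```

This mirrors, in one dimension, why a modest logarithmic overparameterization of the source model suffices: each extra random weight roughly doubles the number of attainable subset sums, so the approximation error decays exponentially in the number of candidates.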

Top-level tags: theory, model, training, machine learning
Detailed tags: lottery ticket hypothesis, attention mechanisms, transformer theory, neural network pruning, theoretical analysis

📄 Paper Summary

The Strong Lottery Ticket Hypothesis for Multi-Head Attention Mechanisms


1️⃣ One-Sentence Summary

This paper proves that randomly initialized multi-head attention networks and Transformers contain, without any training, high-performing subnetworks that approximate the behavior of an arbitrary target network.

