
📄 Abstract - DoPE: Denoising Rotary Position Embedding

Rotary Position Embedding (RoPE) in Transformer models has inherent limits that weaken length extrapolation. We reinterpret the attention map with positional encoding as a noisy feature map, and propose Denoising Positional Encoding (DoPE), a training-free method based on truncated matrix entropy to detect outlier frequency bands in the feature map. Leveraging the noise characteristics of the feature map, we further reparameterize it with a parameter-free Gaussian distribution to achieve robust extrapolation. Our method theoretically reveals the underlying cause of the attention sink phenomenon and its connection to truncated matrix entropy. Experiments on needle-in-a-haystack and many-shot in-context learning tasks demonstrate that DoPE significantly improves retrieval accuracy and reasoning stability across extended contexts (up to 64K tokens). The results show that the denoising strategy for positional embeddings effectively mitigates attention sinks and restores balanced attention patterns, providing a simple yet powerful solution for improving length generalization. Project page: this https URL
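The abstract does not spell out how truncated matrix entropy is computed, but the detection step can be sketched. Below is a minimal, hypothetical illustration: entropy is taken over the top-k normalized singular values of each per-band attention map, and bands whose entropy deviates strongly from the rest are flagged as outliers. The function names, the truncation rank `k`, and the z-score thresholding rule are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def truncated_matrix_entropy(feature_map: np.ndarray, k: int = 16) -> float:
    """Entropy of the top-k normalized singular values of a 2-D feature map.

    A flat spectrum (high entropy) suggests well-spread attention; a spiky
    spectrum (low entropy) suggests energy collapsed onto a few directions,
    which this sketch treats as a noise/outlier signature.
    """
    s = np.linalg.svd(feature_map, compute_uv=False)[:k]
    p = (s ** 2) / np.sum(s ** 2)   # normalized spectral distribution
    p = p[p > 0]                    # avoid log(0)
    return float(-np.sum(p * np.log(p)))

def flag_outlier_bands(attn_per_band: np.ndarray, k: int = 16,
                       z_thresh: float = 2.0) -> np.ndarray:
    """Flag frequency bands whose truncated entropy deviates strongly.

    attn_per_band: (num_bands, seq_len, seq_len) attention maps, one per
    RoPE frequency band. Returns a boolean mask over bands.
    """
    ent = np.array([truncated_matrix_entropy(a, k) for a in attn_per_band])
    z = (ent - ent.mean()) / (ent.std() + 1e-8)  # standardize across bands
    return np.abs(z) > z_thresh

# Toy usage: 8 bands of random 128x128 "attention maps"; band 3 is a
# rank-1 outlier whose truncated entropy collapses toward zero.
rng = np.random.default_rng(0)
maps = rng.standard_normal((8, 128, 128))
maps[3] = np.outer(rng.standard_normal(128), rng.standard_normal(128))
print(flag_outlier_bands(maps))  # band 3 should be flagged
```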

Top-level tags: natural language processing · model training · theory
Detailed tags: positional encoding · length extrapolation · attention mechanism · transformer · entropy analysis

📄 Paper Summary

DoPE: Denoising Rotary Position Embedding


1️⃣ One-Sentence Summary

This paper proposes DoPE, a training-free denoising method that detects and corrects anomalous frequency components in positional encodings. It addresses the attention imbalance that arises when Transformer models process long texts, significantly improving retrieval accuracy and reasoning stability over very long contexts.
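To make the "parameter-free Gaussian" correction concrete, here is a minimal sketch under one possible reading: the rotary frequencies of flagged bands are redrawn from a Gaussian whose mean and variance are estimated from the unflagged bands, so no learned parameters are introduced. The function `reparameterize_bands`, the moment-matching rule, and operating on per-band frequencies rather than the attention map itself are all illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def reparameterize_bands(freqs: np.ndarray, outlier_mask: np.ndarray,
                         seed: int = 0) -> np.ndarray:
    """Replace flagged RoPE frequencies with Gaussian draws.

    The Gaussian's moments are estimated from the non-flagged ("clean")
    bands, so no extra parameters are learned. This is a hypothetical
    moment-matching reading of "parameter-free Gaussian
    reparameterization", not the paper's specification.
    """
    rng = np.random.default_rng(seed)
    clean = freqs[~outlier_mask]
    mu, sigma = clean.mean(), clean.std()
    out = freqs.copy()
    out[outlier_mask] = rng.normal(mu, sigma, size=outlier_mask.sum())
    return out
```

In this reading, the mask produced by the entropy-based detection step above selects which bands are redrawn, and the rest of the rotary embedding is left untouched.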


📄 Open Original PDF