arXiv submission date: 2026-02-24
📄 Abstract - Position-Aware Sequential Attention for Accurate Next Item Recommendations

Sequential self-attention models usually rely on additive positional embeddings, which inject positional information into item representations at the input. In the absence of positional signals, the attention block is permutation-equivariant over sequence positions and thus has no intrinsic notion of temporal order beyond causal masking. We argue that additive positional embeddings make the attention mechanism only superficially sensitive to sequence order: positional information is entangled with item embedding semantics, propagates weakly in deep architectures, and limits the ability to capture rich sequential patterns. To address these limitations, we introduce a kernelized self-attention mechanism, where a learnable positional kernel operates purely in the position space, disentangled from semantic similarity, and directly modulates attention weights. When applied per attention block, this kernel enables adaptive multi-scale sequential modeling. Experiments on standard next-item prediction benchmarks show that our positional kernel attention consistently improves over strong competing baselines.
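The core idea — a learnable kernel over query–key positions that modulates attention weights directly, separate from semantic similarity — can be sketched as follows. This is a minimal NumPy illustration under our own assumptions, not the paper's implementation: we assume a relative-distance kernel stored as a per-block bias array `rel_bias`, combined additively with the scaled dot-product logits under a causal mask.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def positional_kernel_attention(Q, K, V, rel_bias):
    """Single-head attention with a positional kernel (illustrative sketch).

    Q, K, V:  (T, d) item representations for a length-T sequence.
    rel_bias: (T,) assumed learnable kernel values k(i - j) for relative
              distances 0..T-1; one such kernel per attention block would
              give the multi-scale behavior described in the abstract.
    """
    T, d = Q.shape
    logits = Q @ K.T / np.sqrt(d)        # semantic similarity only
    idx = np.arange(T)
    dist = idx[:, None] - idx[None, :]   # relative position i - j
    causal = dist >= 0                   # attend to past and self only
    # The kernel operates purely in position space: add k(i - j) to the
    # logits, disentangled from the item embeddings themselves.
    logits = logits + np.where(causal, rel_bias[np.clip(dist, 0, T - 1)], 0.0)
    logits = np.where(causal, logits, -np.inf)
    return softmax(logits, axis=-1) @ V

rng = np.random.default_rng(0)
T, d = 5, 4
Q, K, V = (rng.normal(size=(T, d)) for _ in range(3))
out = positional_kernel_attention(Q, K, V, rel_bias=rng.normal(size=T))
```

Note that with an additive positional embedding, the positional signal would instead be mixed into `Q` and `K` before the dot product; here it enters the logits directly, so the attention pattern over distances is learned independently of item semantics.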

Top-level tags: machine learning, model training, natural language processing
Detailed tags: sequential recommendation, self-attention, positional encoding, kernel methods, next-item prediction

Position-Aware Sequential Attention for Accurate Next Item Recommendations


1️⃣ One-Sentence Summary

This paper proposes a new attention mechanism in which a dedicated module that learns positional relationships directly modulates the attention weights, capturing temporal-order patterns in user behavior sequences more effectively and significantly improving next-item recommendation accuracy.

Source: arXiv:2602.21052