Abstract - Learning to Rotate: Temporal and Semantic Rotary Encoding for Sequential Modeling
Every Transformer architecture dedicates enormous capacity to learning rich representations in semantic embedding space -- yet the rotation manifold acted upon by Rotary Positional Embeddings (RoPE) has been treated as a fixed, hand-crafted structure, populated only by discrete ordinal indices. We argue that this rotation space is a largely overlooked second dimension of expressivity in the attention mechanism, one whose systematic exploration may open a new door for attention-based architectures. The analogy to complex numbers is instructive: just as introducing the imaginary axis -- orthogonal to and independent of the real line -- unlocked new algebraic structure once believed impossible, treating the rotation manifold as a learnable, signal-conditioned space opens an orthogonal degree of freedom in attention. In this framing, the token embedding encodes the semantic (real) component of a representation -- what a token means -- while the rotation encodes its dynamic (imaginary) component -- how it relates to every other token across time, position, and context. We introduce SIREN-RoPE, a concrete instantiation of this idea, which populates the rotation dimension with heterogeneous signals -- continuous timestamps, cyclical temporal patterns, and categorical metadata -- via a dual-branch Sinusoidal Representation Network (SIREN). As a proof of concept, we evaluate on a production-scale news feed dataset from a major social network using a generative recommender as the ranking model, demonstrating that activating this hidden dimension yields consistent improvements across calibration and ranking objectives with negligible computational overhead. We invite the community to view the rotation space not as a solved positional-encoding detail, but as an untapped axis whose rich structure may prove as consequential for attention as the imaginary unit proved for algebra.
Learning to Rotate: Temporal and Semantic Rotary Encoding for Sequential Modeling
1️⃣ One-Sentence Summary
This paper proposes a new method that turns the fixed Rotary Positional Embedding (RoPE) in Transformers into a learnable rotation space dynamically driven by input signals. A dual-branch Sinusoidal Representation Network (SIREN) encodes semantic information such as timestamps, cyclical patterns, and metadata into rotations, yielding consistent improvements in the ranking and calibration performance of a recommender system with negligible computational overhead.
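To make the idea concrete, the sketch below shows one plausible reading of signal-conditioned rotary encoding: a tiny SIREN (an MLP with sine activations) maps heterogeneous per-token signals (here a hypothetical position, log-timestamp, and hour-of-day phase) to rotation angles, which then rotate query/key pairs exactly as in standard RoPE. The feature set, network shape, and single-branch structure are assumptions for illustration, not the paper's actual SIREN-RoPE architecture.

```python
import numpy as np

def siren_layer(x, W, b, omega=30.0):
    # SIREN layer: sine activation with frequency scaling omega.
    return np.sin(omega * (x @ W + b))

def siren_angles(features, weights):
    # Tiny SIREN MLP mapping conditioning signals to d/2 rotation angles.
    h = features
    for W, b in weights[:-1]:
        h = siren_layer(h, W, b)
    W, b = weights[-1]
    return h @ W + b  # shape: (seq_len, head_dim // 2)

def apply_rotary(x, angles):
    # Rotate consecutive (even, odd) dimension pairs of x by the
    # learned angles -- the standard RoPE rotation, but with angles
    # produced by a network instead of fixed ordinal frequencies.
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

rng = np.random.default_rng(0)
seq_len, head_dim = 4, 8
# Hypothetical conditioning signals: position, log-timestamp,
# and a cyclical hour-of-day feature.
pos = np.arange(seq_len, dtype=float)[:, None]
feats = np.concatenate(
    [pos, np.log1p(pos * 60.0), np.sin(2 * np.pi * pos / 24.0)], axis=1
)
# Randomly initialized SIREN weights (would be learned in training).
dims = [feats.shape[1], 16, head_dim // 2]
weights = [
    (rng.normal(0.0, 1.0 / np.sqrt(m), (m, n)), np.zeros(n))
    for m, n in zip(dims[:-1], dims[1:])
]
angles = siren_angles(feats, weights)
q = rng.normal(size=(seq_len, head_dim))
q_rot = apply_rotary(q, angles)
# Rotations are norm-preserving, so semantic magnitude is untouched.
assert np.allclose(
    np.linalg.norm(q_rot, axis=-1), np.linalg.norm(q, axis=-1)
)
```

Because each pairwise rotation is an isometry, the conditioning signals reshape only the relative geometry between tokens, leaving the semantic content of each embedding intact, which matches the abstract's real/imaginary framing.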