Improved state mixing in higher-order and block diagonal linear recurrent networks
1️⃣ One-sentence summary
This paper proposes two new linear recurrent network architectures that mix network states more thoroughly across the time and channel dimensions, significantly improving the model's ability to handle long sequences while preserving computational efficiency.
Linear recurrent networks (LRNNs) and linear state space models (SSMs) promise computational and memory efficiency on long-sequence modeling tasks, yet their diagonal state transitions limit expressivity. Dense and nonlinear architectures (e.g., LSTMs), on the other hand, are provably more expressive but computationally costly. Here, we explore how expressivity in LRNNs can be increased via richer state mixing across time and channels while maintaining competitive efficiency. Specifically, we introduce two structured LRNN architectures: (i) Higher-order Linear Recurrent Units (H-LRU), which generalize first-order recurrence to higher orders, mixing multiple past states, and (ii) Block-Diagonal LRUs (BD-LRU), which enable dense intra-block channel mixing. Per-channel (H-LRU) or per-row (BD-LRU) L1-normalization of the selective gates stabilizes training and allows window/block sizes to be scaled. A parallel-scan implementation of the proposed architectures keeps throughput competitive with diagonal LRNNs for moderate orders (H-LRU) and block sizes (BD-LRU). On synthetic sequence modeling tasks, BD-LRU matches or exceeds linear SSMs (Mamba), low-rank LRNNs (DeltaNet), and LSTM baselines, while H-LRU is the most parameter-efficient on the compression task. In both synthetic sequence modeling and language modeling, our results indicate that the structure of state mixing, rather than width alone, shapes the expressivity of LRNNs, offering a practical route to closing the efficiency-expressivity gap in linear sequence models.
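The abstract does not spell out the update equations, so the following is only a minimal NumPy sketch of how the two recurrences described above might look: an order-K recurrence that mixes the K most recent states channel-wise (H-LRU), and a block-diagonal transition with dense mixing inside each block (BD-LRU). All names, shapes, and the sequential (non-parallel-scan) form are illustrative assumptions, not the paper's notation or implementation.

```python
import numpy as np

def h_lru_step(history, x_t, a_t, b_t):
    """Hypothetical order-K H-LRU step.

    history: list of the K most recent hidden states, each of shape (d,)
    a_t:     selective gates of shape (K, d), assumed L1-normalized per channel
    b_t:     input gate of shape (d,)
    """
    # Channel-wise mixture of the K most recent states, plus the gated input.
    h_t = sum(a_t[k] * history[k] for k in range(len(history))) + b_t * x_t
    return h_t

def bd_lru_step(h_prev, x_t, blocks, b_t, block_size):
    """Hypothetical BD-LRU step with dense intra-block channel mixing.

    h_prev: previous hidden state of shape (d,)
    blocks: dense per-block transition matrices of shape
            (d // block_size, block_size, block_size), assumed L1-normalized per row
    b_t:    input gate of shape (d,)
    """
    d = h_prev.shape[0]
    h_t = np.empty_like(h_prev)
    for i in range(d // block_size):
        sl = slice(i * block_size, (i + 1) * block_size)
        # Channels mix densely within a block; blocks do not interact.
        h_t[sl] = blocks[i] @ h_prev[sl]
    return h_t + b_t * x_t

# Shape check with toy dimensions (d=8, order K=3, block size 4).
rng = np.random.default_rng(0)
d, K, block_size = 8, 3, 4
history = [rng.standard_normal(d) for _ in range(K)]
a_t = np.abs(rng.standard_normal((K, d)))
a_t /= a_t.sum(axis=0, keepdims=True)          # per-channel L1 normalization
blocks = np.abs(rng.standard_normal((d // block_size, block_size, block_size)))
blocks /= blocks.sum(axis=-1, keepdims=True)    # per-row L1 normalization
x_t, b_t = rng.standard_normal(d), rng.standard_normal(d)
print(h_lru_step(history, x_t, a_t, b_t).shape)        # (8,)
print(bd_lru_step(history[0], x_t, blocks, b_t, 4).shape)  # (8,)
```

In this reading, H-LRU widens the temporal receptive field of the linear recurrence (mixing over K past steps), while BD-LRU widens the channel interaction within each step (dense blocks instead of a purely diagonal transition); the paper's parallel-scan formulation would replace the explicit loops above.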
Source: arXiv: 2602.12021