arXiv submission date: 2026-02-18
📄 Abstract - Reinforced Fast Weights with Next-Sequence Prediction

Fast weight architectures offer a promising alternative to attention-based transformers for long-context modeling by maintaining constant memory overhead regardless of context length. However, their potential is limited by the next-token prediction (NTP) training paradigm. NTP optimizes single-token predictions and ignores semantic coherence across multiple tokens following a prefix. Consequently, fast weight models, which dynamically update their parameters to store contextual information, learn suboptimal representations that fail to capture long-range dependencies. We introduce REFINE (Reinforced Fast weIghts with Next sEquence prediction), a reinforcement learning framework that trains fast weight models under the next-sequence prediction (NSP) objective. REFINE selects informative token positions based on prediction entropy, generates multi-token rollouts, assigns self-supervised sequence-level rewards, and optimizes the model with group relative policy optimization (GRPO). REFINE is applicable throughout the training lifecycle of pre-trained language models: mid-training, post-training, and test-time training. Our experiments on LaCT-760M and DeltaNet-1.3B demonstrate that REFINE consistently outperforms supervised fine-tuning with NTP across needle-in-a-haystack retrieval, long-context question answering, and diverse tasks in LongBench. REFINE provides an effective and versatile framework for improving long-context modeling in fast weight architectures.
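To make the training loop described in the abstract more concrete, here is a minimal PyTorch sketch of one REFINE-style update: entropy-based selection of prefix positions, group rollouts from each selected prefix, a self-supervised sequence-level reward, and group-relative (GRPO-style) advantages. It assumes a HuggingFace-style causal LM interface (`model(input_ids).logits`, `model.generate(...)`); the reward definition, helper names, and hyperparameters are illustrative assumptions rather than the paper's implementation, and the PPO-style ratio clipping of full GRPO is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def entropy_per_position(logits):
    """Shannon entropy of the next-token distribution at each prefix position."""
    log_probs = F.log_softmax(logits, dim=-1)          # (seq, vocab)
    return -(log_probs.exp() * log_probs).sum(dim=-1)  # (seq,)

def sequence_log_prob(model, ids, prefix_len):
    """Sum of per-token log-probabilities over the generated continuation only."""
    logits = model(ids).logits[:, :-1]                 # position t predicts token t+1
    targets = ids[:, 1:]
    token_logp = F.log_softmax(logits, dim=-1).gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_logp[:, prefix_len - 1:].sum(dim=-1)  # skip prefix tokens

def refine_step(model, ref_model, input_ids, optimizer,
                num_positions=4, group_size=8, rollout_len=32):
    """One illustrative REFINE-style update on a single long-context example."""
    # 1. Score prefix positions by prediction entropy; keep the most uncertain ones.
    with torch.no_grad():
        logits = model(input_ids.unsqueeze(0)).logits[0]        # (seq, vocab)
    positions = torch.topk(entropy_per_position(logits), num_positions).indices

    total_loss = 0.0
    for pos in positions.tolist():
        prefix = input_ids[: pos + 1].unsqueeze(0)

        # 2. Sample a group of multi-token rollouts from the same prefix (NSP rollouts).
        rollouts = model.generate(prefix.repeat(group_size, 1),
                                  do_sample=True,
                                  max_new_tokens=rollout_len)

        # 3. Self-supervised sequence-level reward. As an assumption, each rollout
        #    is scored by the log-likelihood a frozen reference model assigns to it.
        with torch.no_grad():
            rewards = sequence_log_prob(ref_model, rollouts, prefix.shape[1])

        # 4. Group-relative advantages: normalize rewards within the rollout group.
        adv = (rewards - rewards.mean()) / (rewards.std() + 1e-6)

        # 5. Policy-gradient loss on rollout tokens, weighted by the advantages.
        logp = sequence_log_prob(model, rollouts, prefix.shape[1])
        total_loss = total_loss - (adv.detach() * logp).mean()

    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()
    return total_loss
```

The sketch treats the policy as a generic causal LM, so it applies unchanged to fast weight models such as LaCT or DeltaNet, whose context-dependent parameter updates happen inside the forward pass.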

Top tags: llm model training natural language processing
Detailed tags: fast weight architectures long-context modeling reinforcement learning next-sequence prediction policy optimization

Reinforced Fast Weights with Next-Sequence Prediction


1️⃣ One-Sentence Summary

This paper proposes REFINE, a reinforcement learning framework that trains the model to predict an entire subsequent sequence rather than a single token. This addresses the lack of semantic coherence that existing fast weight models exhibit in long-text understanding and substantially improves their performance across a range of long-context tasks.

Source: arXiv 2602.16704