arXiv submission date: 2025-12-16
📄 Abstract - RePo: Language Models with Context Re-Positioning

In-context learning is fundamental to modern Large Language Models (LLMs); however, prevailing architectures impose a rigid and fixed contextual structure by assigning linear or constant positional indices. Drawing on Cognitive Load Theory (CLT), we argue that this uninformative structure increases extraneous cognitive load, consuming finite working memory capacity that should be allocated to deep reasoning and attention allocation. To address this, we propose RePo, a novel mechanism that reduces extraneous load via context re-positioning. Unlike standard approaches, RePo utilizes a differentiable module, $f_\phi$, to assign token positions that capture contextual dependencies, rather than relying on a pre-defined integer range. By continually pre-training on the OLMo-2 1B backbone, we demonstrate that RePo significantly enhances performance on tasks involving noisy contexts, structured data, and longer context lengths, while maintaining competitive performance on general short-context tasks. Detailed analysis reveals that RePo successfully allocates higher attention to distant but relevant information, assigns positions in a dense and non-linear space, and captures the intrinsic structure of the input context. Our code is available at this https URL.
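
The abstract describes $f_\phi$ only at a high level, so the following is a minimal, hypothetical PyTorch sketch of what a differentiable re-positioning module could look like: it maps token hidden states to positive per-token increments and cumulatively sums them into continuous, monotone positions, which a downstream positional mechanism (e.g., rotary embeddings evaluated at non-integer positions) could consume. The class name `ContextRePositioner`, the MLP design, and the softplus/cumsum parameterization are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class ContextRePositioner(nn.Module):
    """Hypothetical sketch of a differentiable re-positioning module f_phi.

    Instead of the fixed integer indices 0, 1, ..., n-1, each token gets a
    continuous position predicted from its hidden state, so regions of the
    context can be compressed or stretched (dense, non-linear positions).
    """

    def __init__(self, hidden_dim: int, proj_dim: int = 128):
        super().__init__()
        # Small MLP that scores each token's positional "step size".
        self.step_mlp = nn.Sequential(
            nn.Linear(hidden_dim, proj_dim),
            nn.GELU(),
            nn.Linear(proj_dim, 1),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim)
        # Positive increments keep positions monotonic; cumulative sum turns
        # them into continuous position values per token.
        steps = torch.nn.functional.softplus(self.step_mlp(hidden_states)).squeeze(-1)
        positions = torch.cumsum(steps, dim=-1)  # (batch, seq_len), continuous
        return positions


if __name__ == "__main__":
    repositioner = ContextRePositioner(hidden_dim=64)
    h = torch.randn(2, 10, 64)
    pos = repositioner(h)
    print(pos.shape)  # torch.Size([2, 10])
```

Because the predicted positions are continuous and produced by a differentiable network, they can be trained end-to-end with the backbone during continual pre-training, which is the property the abstract emphasizes.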

Top-level tags: llm, model training, natural language processing
Detailed tags: in-context learning, positional encoding, attention allocation, cognitive load theory, context re-positioning

RePo: Language Models with Context Re-Positioning


1️⃣ One-Sentence Summary

This paper proposes a new method called RePo, which uses a learnable module to dynamically adjust the positional encoding of tokens in the input text, helping the language model handle complex or noisy contexts more effectively and improving its reasoning on tasks involving long texts and noisy data.


Source: arXiv:2512.14391