arXiv submission date: 2025-12-31
📄 Abstract - Recursive Language Models

We study allowing large language models (LLMs) to process arbitrarily long prompts through the lens of inference-time scaling. We propose Recursive Language Models (RLMs), a general inference strategy that treats long prompts as part of an external environment and allows the LLM to programmatically examine, decompose, and recursively call itself over snippets of the prompt. We find that RLMs successfully handle inputs up to two orders of magnitude beyond model context windows and, even for shorter prompts, dramatically outperform the quality of base LLMs and common long-context scaffolds across four diverse long-context tasks, while having comparable (or cheaper) cost per query.
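To make the recursive strategy concrete, below is a minimal Python sketch of one possible RLM-style inference loop. Note that in the paper the LLM itself programmatically decides how to examine and split the prompt; this sketch hard-codes a simple fixed-size decomposition instead, and the names `call_llm`, `MAX_CONTEXT`, and `CHUNK_SIZE` are illustrative assumptions, not the authors' API.

```python
# A minimal sketch of an RLM-style recursive inference loop, assuming a
# fixed-size decomposition strategy. All names here are hypothetical.
from typing import Callable

MAX_CONTEXT = 8_000  # hypothetical per-call input budget, in characters
CHUNK_SIZE = 6_000   # hypothetical snippet size used when decomposing

def recursive_lm(prompt: str, query: str, call_llm: Callable[[str], str]) -> str:
    """Answer `query` over `prompt`, recursing when the prompt is too long."""
    if len(prompt) + len(query) <= MAX_CONTEXT:
        # Base case: the whole prompt fits, so answer with a single call.
        return call_llm(f"Context:\n{prompt}\n\nQuestion: {query}")

    # Recursive case: decompose the long prompt into snippets and answer
    # the query over each snippet with a recursive call.
    snippets = [prompt[i:i + CHUNK_SIZE] for i in range(0, len(prompt), CHUNK_SIZE)]
    partials = [recursive_lm(s, query, call_llm) for s in snippets]

    # Synthesize the per-snippet answers; recurse again in case the combined
    # partial answers still exceed the context budget. This terminates as
    # long as each answer is shorter than the snippet it came from.
    combined = "\n".join(f"- {p}" for p in partials)
    return recursive_lm(combined, f"Synthesize a final answer to: {query}", call_llm)
```

Because the synthesis step is itself a recursive call, the same budget check applies at every level, which is what lets inputs far beyond a single context window be reduced step by step.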

Top tags: llm systems model evaluation
Detailed tags: long-context inference-time scaling recursive models prompt processing context window

Recursive Language Models


1️⃣ One-Sentence Summary

This paper proposes a new method called Recursive Language Models, which lets a large language model programmatically decompose its input and recursively call itself over the pieces, efficiently handling text far beyond its native context window and substantially improving performance on long-context tasks.
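As a hypothetical usage of the sketch above, a stub model call is enough to see the recursion traverse an over-long input; `stub_llm` stands in for a real model API:

```python
# Hypothetical usage of recursive_lm with a stub in place of a real LLM.
def stub_llm(prompt: str) -> str:
    # A real implementation would call a model API; this just echoes size.
    return f"(answer derived from {len(prompt)} chars of input)"

long_document = "..." * 100_000  # a prompt far beyond one context window
print(recursive_lm(long_document, "What is the main finding?", stub_llm))
```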

Source: arXiv:2512.24601