📄 Abstract - ORION: Teaching Language Models to Reason Efficiently in the Language of Thought

Large Reasoning Models (LRMs) achieve strong performance in mathematics, code generation, and task planning, but their reliance on long chains of verbose "thinking" tokens leads to high latency, redundancy, and incoherent reasoning paths. Inspired by the Language of Thought Hypothesis, which posits that human reasoning operates over a symbolic, compositional mental language called Mentalese, we introduce a framework that trains models to reason in a similarly compact style. Mentalese encodes abstract reasoning as ultra-compressed, structured tokens, enabling models to solve complex problems with far fewer steps. To improve both efficiency and accuracy, we propose SHORTER LENGTH PREFERENCE OPTIMIZATION (SLPO), a reinforcement learning method that rewards concise solutions that stay correct, while still allowing longer reasoning when needed. Applied to Mentalese-aligned models, SLPO yields significantly higher compression rates by enabling concise reasoning that preserves the benefits of detailed thinking without the computational overhead. Across benchmarks including AIME 2024 and 2025, MinervaMath, OlympiadBench, Math500, and AMC, our ORION models produce reasoning traces with 4-16x fewer tokens, achieve up to 5x lower inference latency, and reduce training costs by 7-9x relative to the DeepSeek R1 Distilled model, while maintaining 90-98% of its accuracy. ORION also surpasses Claude and ChatGPT-4o by up to 5% in accuracy while maintaining 2x compression. These results show that Mentalese-style compressed reasoning offers a step toward human-like cognitive efficiency, enabling real-time, cost-effective reasoning without sacrificing accuracy.
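The core idea of SLPO described above, rewarding solutions that stay correct while getting shorter, can be illustrated with a minimal shaped-reward sketch. This is a hypothetical illustration only: the function name, the `alpha` weight, and the normalization by a reference trace length are assumptions, not the paper's exact formulation.

```python
def slpo_reward(is_correct: bool, length: int, ref_length: int, alpha: float = 0.5) -> float:
    """Toy SLPO-style reward: correctness is a hard gate, brevity earns a bonus.

    - Incorrect answers get zero reward, so compression never trumps accuracy.
    - Correct answers get a base reward of 1.0 plus a bonus that grows as the
      trace shrinks below a reference length (e.g. the average trace length
      of a baseline model); traces longer than the reference earn no bonus,
      which leaves room for longer reasoning when the problem demands it.
    """
    if not is_correct:
        return 0.0
    brevity_bonus = max(0.0, 1.0 - length / ref_length)
    return 1.0 + alpha * brevity_bonus

# A correct answer at half the reference length beats a correct answer
# at full length, and any correct answer beats an incorrect one.
print(slpo_reward(True, 100, 200))   # shorter and correct: 1.25
print(slpo_reward(True, 200, 200))   # correct at reference length: 1.0
print(slpo_reward(False, 50, 200))   # wrong, however short: 0.0
```

Under this shaping, a policy-gradient learner is pushed toward concise traces only within the set of correct solutions, which matches the abstract's claim that SLPO "rewards concise solutions that stay correct, while still allowing longer reasoning when needed".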

Top-level tags: llm, model training, theory
Detailed tags: reasoning efficiency, language of thought, compressed reasoning, reinforcement learning, latency reduction

ORION: Teaching Language Models to Reason Efficiently in the Language of Thought


1️⃣ One-sentence summary

This paper introduces a framework called ORION that trains models to reason with compressed, structured symbols resembling a human "language of thought", sharply reducing the steps and cost of computation while preserving high accuracy, and thereby enabling faster, more efficient AI reasoning.


📄 Open the original PDF