arXiv submission date: 2026-01-07
📄 Abstract - ROI-Reasoning: Rational Optimization for Inference via Pre-Computation Meta-Cognition

Large language models (LLMs) can achieve strong reasoning performance with sufficient computation, but they do not inherently know how much computation a task requires. We study budgeted inference-time reasoning for multiple tasks under a strict global token constraint and formalize it as an Ordered Stochastic Multiple-Choice Knapsack Problem (OS-MCKP). This perspective highlights a meta-cognitive requirement -- anticipating task difficulty, estimating return on investment (ROI), and allocating computation strategically. We propose ROI-Reasoning, a two-stage framework that endows LLMs with intrinsic, budget-aware rationality. In the first stage, Meta-Cognitive Fine-Tuning teaches models to predict reasoning cost and expected utility before generation, enabling explicit solve-or-skip decisions. Next, Rationality-Aware Reinforcement Learning optimizes sequential decision making under a hard token budget, allowing models to learn long-horizon allocation strategies. Across budgeted mathematical reasoning benchmarks, ROI-Reasoning consistently improves the overall score while substantially reducing regret under tight computation budgets.
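To make the OS-MCKP framing concrete, here is a minimal sketch of the solve-or-skip decision loop the abstract describes: tasks arrive in a fixed order (the "Ordered" part), and the model must spend a strict global token budget across them. The `Task` fields, `predict`-style estimates, and `roi_threshold` are all illustrative assumptions; in the paper the estimates come from Meta-Cognitive Fine-Tuning and the policy is learned via Rationality-Aware Reinforcement Learning, not this fixed greedy rule.

```python
from dataclasses import dataclass

@dataclass
class Task:
    prompt: str
    predicted_cost: int       # estimated tokens needed to solve (meta-cognitive estimate)
    predicted_utility: float  # estimated score/probability of success if solved

def solve_or_skip(tasks: list[Task], budget: int, roi_threshold: float = 0.5):
    """Greedy ROI-based allocation under a hard token budget (illustrative only).

    A task is attempted only if (a) its predicted cost fits the remaining
    budget and (b) its predicted utility per 1k tokens clears a threshold.
    This mirrors the explicit solve-or-skip decisions described in the
    abstract, not the paper's exact learned policy.
    """
    remaining = budget
    decisions = []
    for task in tasks:  # tasks must be processed in arrival order
        roi = task.predicted_utility / max(task.predicted_cost, 1) * 1000
        if task.predicted_cost <= remaining and roi >= roi_threshold:
            decisions.append((task.prompt, "solve"))
            remaining -= task.predicted_cost  # spend the predicted tokens
        else:
            decisions.append((task.prompt, "skip"))  # save budget for later tasks
    return decisions, remaining
```

A learned policy improves on this sketch mainly by trading off current ROI against the (stochastic) value of tasks yet to arrive, which is the long-horizon allocation behavior the RL stage targets.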

Top-level tags: llm, model training, model evaluation
Detailed tags: budgeted inference, computation allocation, meta-cognition, reinforcement learning, mathematical reasoning

ROI-Reasoning: Rational Optimization for Inference via Pre-Computation Meta-Cognition


1️⃣ One-Sentence Summary

This paper proposes a method called ROI-Reasoning, which teaches large language models to estimate task difficulty and computation cost before generating an answer and, under a strict compute constraint, to intelligently choose whether to solve or skip each problem, achieving more efficient and rational allocation of computation on mathematical reasoning tasks.

Source: arXiv:2601.03822