arXiv submission date: 2026-02-03
📄 Abstract - Conformal Thinking: Risk Control for Reasoning on a Compute Budget

Reasoning Large Language Models (LLMs) enable test-time scaling, with dataset-level accuracy improving as the token budget increases, motivating adaptive reasoning -- spending tokens when they improve reliability and stopping early when additional computation is unlikely to help. However, setting the token budget, as well as the threshold for adaptive reasoning, is a practical challenge that entails a fundamental risk-accuracy trade-off. We reframe the budget-setting problem as risk control, limiting the error rate while minimizing compute. Our framework introduces an upper threshold that stops reasoning when the model is confident (risking incorrect output) and a novel parametric lower threshold that preemptively stops unsolvable instances (risking premature stoppage). Given a target risk and a validation set, we use distribution-free risk control to optimally specify these stopping mechanisms. For scenarios with multiple budget-controlling criteria, we incorporate an efficiency loss to select the most computationally efficient exiting mechanism. Empirical results across diverse reasoning tasks and models demonstrate the effectiveness of our risk control approach, showing computational efficiency gains from the lower threshold and ensemble stopping mechanisms while adhering to the user-specified risk target.
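To make the calibration step concrete, here is a minimal Python sketch of distribution-free risk control for the upper (confidence) stopping threshold, in the spirit of Learn-then-Test style fixed-sequence calibration. The function names, the Hoeffding bound, and the simplifying assumption that examples reasoning to the full budget end up correct are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def risk_upper_bound(emp_risk: float, n: int, delta: float) -> float:
    """Hoeffding upper confidence bound on the true risk, valid w.p. 1 - delta."""
    return emp_risk + np.sqrt(np.log(1.0 / delta) / (2.0 * n))

def calibrate_upper_threshold(confidences, errors, alpha=0.1, delta=0.05):
    """Pick the cheapest confidence threshold tau whose certified risk <= alpha.

    confidences[i]: model self-confidence on validation example i at the point
                    where it would first be allowed to stop.
    errors[i]:      True if the answer produced at that point is wrong.
    Stopping rule:  exit early whenever confidence >= tau, so a smaller tau
                    means earlier stopping and less compute.
    """
    confidences = np.asarray(confidences, dtype=float)
    errors = np.asarray(errors, dtype=bool)
    n = len(confidences)
    best = float("inf")  # fallback: never stop early
    # Fixed-sequence scan from the safest threshold (stop rarely) toward
    # cheaper ones; keep the last certified value, halt at the first failure
    # to avoid multiple-testing inflation.
    for tau in np.sort(np.unique(confidences))[::-1]:
        stopped = confidences >= tau
        # Dataset-level risk under the (simplifying) assumption that examples
        # which reason to the full budget end up correct: only early stops on
        # wrong answers count as errors.
        emp_risk = (errors & stopped).mean()
        if risk_upper_bound(emp_risk, n, delta) <= alpha:
            best = float(tau)
        else:
            break
    return best
```

The same scan pattern could in principle be applied to the lower threshold's parameters, with the efficiency loss breaking ties among certified candidates; that extension is omitted here.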

Top-level tags: llm model evaluation theory
Detailed tags: risk control adaptive computation conformal prediction reasoning efficiency early exiting

Conformal Thinking: Risk Control for Reasoning on a Compute Budget


1️⃣ One-Sentence Summary

This paper proposes a method that lets a large language model automatically decide when to stop reasoning on a question, minimizing compute while keeping the error rate within a user-specified limit.
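As a rough illustration of this adaptive stopping behavior, the sketch below wires calibrated upper and lower thresholds into a chunked generation loop. `generate_chunk`, `confidence`, and `answer` are hypothetical model APIs, and the parametric lower threshold is stood in for by an arbitrary function `tau_lower_fn` of the tokens spent; none of these names come from the paper.

```python
def reason_with_budget(model, prompt, tau_upper, tau_lower_fn,
                       max_tokens=4096, chunk=256):
    """Chunked reasoning loop with two calibrated exits:
      - upper exit: confidence >= tau_upper -> answer now (risk: wrong output)
      - lower exit: confidence < tau_lower_fn(tokens_used)
                    -> give up preemptively (risk: premature stoppage)
    """
    trace, tokens_used = "", 0
    while tokens_used < max_tokens:
        trace += model.generate_chunk(prompt, trace, n_tokens=chunk)  # hypothetical API
        tokens_used += chunk
        c = model.confidence(prompt, trace)  # hypothetical self-confidence score
        if c >= tau_upper:
            return model.answer(prompt, trace), tokens_used
        if c < tau_lower_fn(tokens_used):
            return None, tokens_used  # abstain / route to a fallback
    return model.answer(prompt, trace), tokens_used  # budget exhausted
```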

Source: arXiv 2602.03814