arXiv submission date: 2025-12-05
📄 Abstract - LYNX: Learning Dynamic Exits for Confidence-Controlled Reasoning

Large reasoning models achieve strong performance on complex tasks by generating extended chains of thought, but they often "overthink": continuing to reason long after they have enough information to answer correctly. This wastes inference-time compute and can hurt accuracy. Existing attempts to stop early either manipulate decoding with extra sampling and heuristics, rely on auxiliary verifier models, or operate only as post-hoc analysis pipelines without formal guarantees. We introduce LYNX, an online early-exit mechanism that turns a model's own hidden-state awareness into confidence-controlled stopping decisions. LYNX attaches exit decisions to naturally occurring reasoning cues (e.g., "hmm", "wait") during generation, trains a lightweight probe on hidden states at those cue tokens using supervision from forced exits, and wraps the resulting scores in split conformal prediction to obtain distribution-free control over premature exits. Crucially, we train and calibrate this probe once on a generic mathematical corpus and reuse it unchanged across benchmarks, decoding temperatures, and even non-mathematical tasks. Across three model families spanning 1.5B to 32B parameters, a single mathematically trained probe per base model yields strong accuracy–efficiency tradeoffs. On GSM8K, LYNX matches or improves baseline accuracy while reducing tokens by 40–65%; on MATH-500 it improves accuracy by up to 12 points with roughly 35–60% fewer tokens; on AIME 2024 it recovers baseline accuracy with more than 50% token savings; and on CommonsenseQA, a non-math benchmark, it transfers zero-shot with modest accuracy gains and up to 70% fewer tokens. Compared to state-of-the-art early-exit methods, LYNX offers competitive or superior Pareto frontiers while remaining fully online, requiring no proxy models at inference, and providing explicit, user-tunable confidence guarantees.
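The abstract's statistical guarantee hinges on split conformal prediction over the probe's scores at cue tokens. As a rough illustration, the sketch below calibrates an exit threshold from forced-exit supervision so that cue tokens at which exiting would be wrong trigger an exit at most an `alpha` fraction of the time; the scoring rule, the direction of the guarantee, and the quantile construction are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def premature_exit_threshold(cal_scores, cal_exit_correct, alpha=0.1):
    """Calibrate an exit threshold with a split-conformal quantile rule.

    cal_scores       : probe confidence at cue tokens on a held-out calibration split
    cal_exit_correct : 1 if forcing an exit at that cue token yields the correct
                       final answer (supervision from forced exits), else 0
    alpha            : user-tunable tolerance for premature exits

    Returns a threshold tau; at inference the model exits whenever the probe
    score at a cue token exceeds tau.  By exchangeability, a fresh cue token
    at which exiting would be wrong exceeds tau with probability at most alpha.
    """
    neg = np.sort(cal_scores[cal_exit_correct == 0])  # points where exiting is premature
    n = len(neg)
    k = int(np.ceil((n + 1) * (1 - alpha)))           # finite-sample conformal rank
    if k > n:
        return np.inf                                 # too few calibration points: never exit
    return float(neg[k - 1])

# Example with synthetic probe scores on 1,000 calibration cue tokens.
rng = np.random.default_rng(0)
scores = rng.uniform(0.0, 1.0, size=1000)
labels = (scores + 0.2 * rng.normal(size=1000) > 0.5).astype(int)
tau = premature_exit_threshold(scores, labels, alpha=0.1)
print(f"exit when probe score > {tau:.3f}")
```

In the decoding loop (sketched after the one-sentence summary below), this tau is the only knob the user touches: lowering alpha makes exits more conservative, trading back some of the token savings for a tighter premature-exit guarantee.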

Top-level tags: llm, model training, model evaluation
Detailed tags: early-exit, confidence calibration, reasoning efficiency, conformal prediction, dynamic inference

LYNX: Learning Dynamic Exits for Confidence-Controlled Reasoning


1️⃣ One-sentence summary

This paper proposes a method called LYNX that lets large reasoning models "stop early, intelligently" while generating an answer: by analyzing the model's internal hidden states, it judges when there is already enough confidence to reach the correct conclusion, cutting compute cost and generation time substantially while maintaining or even improving accuracy.
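To make this concrete, here is a minimal sketch of what the online exit loop could look like with a Hugging Face-style causal LM: whenever the model emits a reasoning cue such as "wait" or "hmm", a linear probe scores the hidden state at that token, and once the score exceeds the conformally calibrated threshold, generation is redirected to a short forced answer. The cue list, the forced-answer string, the probe layer, greedy decoding, and the absence of KV caching are all simplifying assumptions for illustration.

```python
import torch

CUE_TOKENS = {"wait", "hmm"}            # assumed cue strings; the paper's set may differ
FORCED_ANSWER = "\n\nFinal answer:"     # hypothetical prompt used to force an exit

@torch.no_grad()
def generate_with_lynx_exit(model, tokenizer, prompt, probe_w, probe_b, tau,
                            max_new_tokens=2048):
    """Greedy decoding with probe-gated early exits at reasoning cue tokens."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    prompt_len = ids.shape[1]
    for _ in range(max_new_tokens):
        out = model(ids, output_hidden_states=True)   # no KV cache: kept simple on purpose
        # If the most recently generated token is a cue, consult the probe.
        if ids.shape[1] > prompt_len:
            last = tokenizer.decode(ids[0, -1].item()).strip().lower()
            if last in CUE_TOKENS:
                h = out.hidden_states[-1][0, -1]              # hidden state at the cue token
                score = torch.sigmoid(h @ probe_w + probe_b)  # lightweight linear probe
                if score > tau:                               # confident enough: stop reasoning
                    forced = tokenizer(FORCED_ANSWER, return_tensors="pt").input_ids
                    ids = torch.cat([ids, forced], dim=-1)
                    ids = model.generate(ids, max_new_tokens=64)  # emit the short answer
                    break
        next_id = out.logits[0, -1].argmax().view(1, 1)
        if next_id.item() == tokenizer.eos_token_id:
            break
        ids = torch.cat([ids, next_id], dim=-1)
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```

A production implementation would reuse the KV cache and batch these checks, but the control flow is the essential idea: the probe is consulted only at cue tokens, and the model exits only when its score clears the calibrated threshold.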


Source: arXiv:2512.05325