arXiv submission date: 2026-04-16
📄 Abstract - From Tokens to Steps: Verification-Aware Speculative Decoding for Efficient Multi-Step Reasoning

Speculative decoding (SD) accelerates large language model inference by allowing a lightweight draft model to propose outputs that a stronger target model verifies. However, its token-centric nature allows erroneous reasoning steps to propagate. Prior approaches mitigate this with external reward models, but these incur additional latency and computational overhead and limit generalizability. We propose SpecGuard, a verification-aware speculative decoding framework that performs step-level verification using only model-internal signals. At each step, SpecGuard samples multiple draft candidates and selects the most consistent step, which is then validated using an ensemble of two lightweight model-internal signals: (i) an attention-based grounding score that measures attribution to the input and previously accepted steps, and (ii) a log-probability-based score that captures token-level confidence. These signals jointly determine whether a step is accepted or recomputed by the target model, allocating compute selectively. Experiments across a range of reasoning benchmarks show that SpecGuard improves accuracy by 3.6% while reducing latency by roughly 11%, outperforming both standard SD and reward-guided SD.
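To make the accept-or-recompute flow described above concrete, here is a minimal Python sketch of a SpecGuard-style step loop. This is not the paper's implementation: every function name and threshold below is a hypothetical stand-in, and the two scoring functions are crude proxies (lexical overlap and a fake length-based confidence) for the real attention-based grounding score and token log-probability score, which would come from the models' internals.

```python
# Hypothetical sketch of SpecGuard-style step-level verification.
# All names (draft_candidates, grounding_score, accept, ...) are
# illustrative assumptions, not the paper's API.

import random
from collections import Counter

random.seed(0)

def draft_candidates(context: str, k: int = 4) -> list[str]:
    """Stand-in for sampling k candidate reasoning steps from the draft model."""
    pool = ["step: add 3 and 4", "step: add 3 and 4", "step: multiply 3 by 4"]
    return [random.choice(pool) for _ in range(k)]

def most_consistent(candidates: list[str]) -> str:
    """Select the step that most candidates agree on (majority vote)."""
    return Counter(candidates).most_common(1)[0][0]

def grounding_score(step: str, context: str) -> float:
    """Proxy for the attention-based grounding score: lexical overlap with
    the input and previously accepted steps stands in for attention mass."""
    step_tokens, ctx_tokens = set(step.split()), set(context.split())
    return len(step_tokens & ctx_tokens) / max(len(step_tokens), 1)

def logprob_score(step: str) -> float:
    """Proxy for mean token log-probability; a real system would read this
    from the draft model's output distribution."""
    return -0.3 * len(step.split())

def accept(step: str, context: str, g_thresh: float = 0.1,
           lp_thresh: float = -5.0) -> bool:
    """Ensemble rule: accept only if both internal signals clear thresholds."""
    return grounding_score(step, context) >= g_thresh and logprob_score(step) >= lp_thresh

def target_recompute(context: str) -> str:
    """Stand-in for regenerating the step with the stronger target model."""
    return "step: add 3 and 4 (recomputed by target)"

def specguard_step(context: str) -> str:
    step = most_consistent(draft_candidates(context))
    if accept(step, context):
        return step                   # cheap path: keep the draft's step
    return target_recompute(context)  # expensive path: target model redoes it

if __name__ == "__main__":
    print(specguard_step("question: add 3 and 4"))
```

The design point the sketch illustrates is the selective compute allocation: the target model is only invoked when the ensemble of internal signals rejects a step, so well-grounded, high-confidence steps stay on the fast draft path.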

Top-level tags: llm model training systems
Detailed tags: speculative decoding reasoning verification inference acceleration step-level verification

From Tokens to Steps: Verification-Aware Speculative Decoding for Efficient Multi-Step Reasoning


1️⃣ One-sentence summary

This paper proposes a method called SpecGuard, which uses model-internal signals to verify and select whole reasoning steps, improving the multi-step reasoning accuracy of large language models while also reducing inference latency.

Source: arXiv: 2604.15244