arXiv submission date: 2026-03-03
📄 Abstract - Speculative Speculative Decoding

Autoregressive decoding is bottlenecked by its sequential nature. Speculative decoding has become a standard way to accelerate inference by using a fast draft model to predict upcoming tokens from a slower target model, and then verifying them in parallel with a single target model forward pass. However, speculative decoding itself relies on a sequential dependence between speculation and verification. We introduce speculative speculative decoding (SSD) to parallelize these operations. While a verification is ongoing, the draft model predicts likely verification outcomes and prepares speculations pre-emptively for them. If the actual verification outcome is then in the predicted set, a speculation can be returned immediately, eliminating drafting overhead entirely. We identify three key challenges presented by speculative speculative decoding, and suggest principled methods to solve each. The result is Saguaro, an optimized SSD algorithm. Our implementation is up to 2x faster than optimized speculative decoding baselines and up to 5x faster than autoregressive decoding with open source inference engines.
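
The abstract describes SSD only at the control-flow level, so below is a minimal, runnable Python sketch of that flow using toy stand-in models. All names here (`draft`, `verify`, `predict_likely_outcomes`, `ssd_generate`) are hypothetical placeholders for illustration, not the paper's Saguaro implementation; the real algorithm overlaps drafting with the target model's forward pass on an accelerator rather than with a Python thread.

```python
# Toy sketch of speculative speculative decoding (SSD) control flow:
# while verification of the current proposal runs, pre-draft speculations
# for the verification outcomes the draft model thinks are most likely.
import random
from concurrent.futures import ThreadPoolExecutor

random.seed(0)
VOCAB = list(range(100))  # toy token vocabulary


def draft(prefix, k=4):
    """Cheap draft model: propose k candidate tokens to follow `prefix`."""
    return [random.choice(VOCAB) for _ in range(k)]


def verify(prefix, proposal):
    """Expensive target model step: accept a prefix of `proposal` and emit
    one corrected token. Returns the tokens actually appended to `prefix`."""
    accepted = proposal[: random.randint(0, len(proposal))]
    return accepted + [random.choice(VOCAB)]


def predict_likely_outcomes(prefix, proposal, n=2):
    """Draft-side guess of the n most likely verification outcomes
    (here: the longest accepted prefixes plus a guessed correction token)."""
    outcomes = []
    for cut in range(len(proposal), max(len(proposal) - n, -1), -1):
        outcomes.append(tuple(proposal[:cut] + [random.choice(VOCAB)]))
    return outcomes


def ssd_generate(prefix, steps=5):
    """Generate tokens while overlapping drafting with verification."""
    tokens = list(prefix)
    proposal = draft(tokens)
    with ThreadPoolExecutor(max_workers=1) as pool:
        for _ in range(steps):
            # Kick off the (slow) verification asynchronously.
            verify_future = pool.submit(verify, list(tokens), proposal)

            # Overlap: while verification runs, pre-draft a speculation for
            # each verification outcome the draft model considers likely.
            pre_drafts = {
                outcome: draft(tokens + list(outcome))
                for outcome in predict_likely_outcomes(tokens, proposal)
            }

            outcome = tuple(verify_future.result())
            tokens.extend(outcome)

            if outcome in pre_drafts:
                # Hit: a speculation is already prepared, so there is
                # no drafting latency before the next verification.
                proposal = pre_drafts[outcome]
            else:
                # Miss: fall back to drafting after verification, exactly
                # as in ordinary speculative decoding.
                proposal = draft(tokens)
    return tokens


if __name__ == "__main__":
    print(ssd_generate([1, 2, 3]))
```

With random toy models the pre-drafted set rarely matches the actual verification outcome; the sketch only illustrates the hit/miss logic, not the speedups reported in the paper.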

Top-level tags: llm model training systems
Detailed tags: speculative decoding inference acceleration parallel verification autoregressive models optimization

Speculative Speculative Decoding


1️⃣ One-Sentence Summary

This paper proposes a new method called "speculative speculative decoding" that further parallelizes inference: while the current prediction is still being verified, the draft model prepares speculations for several likely follow-up outcomes in advance, speeding up large language model generation by up to another 2x on top of existing acceleration techniques.

Source: arXiv 2603.03251