arXiv submission date: 2026-04-05
📄 Abstract - Cactus: Accelerating Auto-Regressive Decoding with Constrained Acceptance Speculative Sampling

Speculative sampling (SpS) has been successful in accelerating the decoding throughput of auto-regressive large language models by leveraging smaller draft models. SpS strictly enforces that the generated distribution matches that of the verifier LLM. This is unnecessarily restrictive, as slight variations of the verifier's distribution, such as sampling with top-$k$ or temperature, would also be acceptable. Typical acceptance sampling (TAS) alleviates this issue by accepting more tokens using entropy-based heuristics. However, this approach distorts the verifier distribution, potentially degrading output quality when the verifier encodes critical information. In this work, we formalize the speculative sampling algorithm through the lens of constrained optimization. Based on this formulation, we propose Cactus (constrained acceptance speculative sampling), a method that guarantees controlled divergence from the verifier distribution while increasing acceptance rates. Empirical results across a wide range of benchmarks confirm the effectiveness of our approach.
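To make the baseline concrete, the standard SpS verification step that the abstract says "strictly enforces" the verifier distribution can be sketched as follows. This is a minimal NumPy illustration of vanilla speculative sampling (accept a draft token with probability `min(1, p/q)`, else resample from the residual), not the Cactus algorithm itself; the function name and signature are illustrative.

```python
import numpy as np

def sps_verify(p, q, x, rng):
    """Standard speculative-sampling verification step.

    p: verifier distribution over the vocabulary (1-D array summing to 1)
    q: draft distribution over the vocabulary (1-D array summing to 1)
    x: token index proposed by sampling from the draft distribution q
    Returns the emitted token: x if accepted, otherwise a token
    resampled from the normalized residual max(0, p - q).
    """
    # Accept the draft token with probability min(1, p[x] / q[x]).
    if rng.random() < min(1.0, p[x] / q[x]):
        return x
    # On rejection, resample from the residual distribution; this
    # correction makes the overall output distribution exactly p.
    residual = np.maximum(p - q, 0.0)
    residual /= residual.sum()
    return int(rng.choice(len(p), p=residual))
```

Cactus, as described above, relaxes exactly this accept/reject rule: instead of forcing the output distribution to equal `p`, it allows a controlled, bounded divergence from `p` so that more draft tokens are accepted.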

Top-level tags: llm model training systems
Detailed tags: speculative sampling decoding acceleration constrained optimization auto-regressive models inference efficiency

Cactus: Accelerating Auto-Regressive Decoding with Constrained Acceptance Speculative Sampling


1️⃣ One-Sentence Summary

This paper proposes a new method called Cactus, which improves speculative sampling through a constrained optimization framework, significantly increasing text-generation speed while keeping the large model's output quality essentially unchanged.

Source: arXiv:2604.04987