Learning to Draft: Adaptive Speculative Decoding with Reinforcement Learning
1️⃣ One-Sentence Summary
This paper proposes a new method called "Learning to Draft" that uses reinforcement learning to train two co-adaptive policies to dynamically coordinate the drafting and verification phases of large language model decoding. By directly optimizing overall decoding speed, it achieves up to 36.4% greater speedup than the current best method across a variety of tasks.
Speculative decoding accelerates large language model (LLM) inference by using a small draft model to generate candidate tokens for a larger target model to verify. The efficacy of this technique hinges on the trade-off between the time spent drafting candidates and the time spent verifying them. However, current state-of-the-art methods rely on a static time allocation, while recent dynamic approaches optimize proxy metrics such as acceptance length, often neglecting the true time cost and treating the drafting and verification phases in isolation. To address these limitations, we introduce Learning to Draft (LTD), a novel method that directly optimizes the throughput of each draft-and-verify cycle. We formulate the problem as a reinforcement learning environment and train two co-adaptive policies to dynamically coordinate the draft and verification phases. This encourages the policies to adapt to each other and explicitly maximize decoding efficiency. We conducted extensive evaluations on five diverse LLMs and four distinct tasks. Our results show that LTD achieves speedup ratios ranging from 2.24x to 4.32x, outperforming the state-of-the-art method Eagle3 by up to 36.4%.
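To make the drafting/verification trade-off concrete, here is a minimal simulation of one draft-and-verify cycle. All timing constants (`DRAFT_COST`, `VERIFY_COST`) and the fixed acceptance probability are illustrative assumptions, not values from the paper; the point is that per-cycle throughput (tokens emitted per unit time), rather than a proxy like acceptance length, is the quantity LTD's reward directly targets.

```python
import random

DRAFT_COST = 1.0    # assumed time for the small model to draft one token
VERIFY_COST = 5.0   # assumed time for one target-model verification pass

def draft_and_verify_cycle(draft_len: int, accept_prob: float, rng: random.Random):
    """Simulate one cycle: draft `draft_len` tokens, verify them, and
    return (emitted_tokens, elapsed_time, throughput).

    Drafted tokens are accepted left-to-right until the first rejection;
    the target model then contributes one token of its own, so at least
    one token is emitted per cycle, as in standard speculative decoding.
    """
    accepted = 0
    for _ in range(draft_len):
        if rng.random() < accept_prob:
            accepted += 1
        else:
            break
    emitted = accepted + 1  # +1: token sampled from the target itself
    elapsed = draft_len * DRAFT_COST + VERIFY_COST
    return emitted, elapsed, emitted / elapsed

if __name__ == "__main__":
    rng = random.Random(0)
    # With a fixed acceptance probability, sweeping the draft length shows
    # the trade-off the paper's learned policies navigate adaptively:
    # drafting too few tokens wastes verification passes, drafting too
    # many wastes draft time on tokens that get rejected.
    for k in (1, 4, 8, 16):
        cycles = [draft_and_verify_cycle(k, accept_prob=0.7, rng=rng)
                  for _ in range(1000)]
        avg_tp = sum(tp for _, _, tp in cycles) / len(cycles)
        print(f"draft_len={k:2d}  avg throughput={avg_tp:.3f}")
```

In this toy model the best draft length depends on both the acceptance probability and the cost ratio; LTD replaces such a static sweep with policies that choose these decisions dynamically per cycle.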
Source: arXiv: 2603.01639