
arXiv submission date: 2025-11-24
📄 Abstract - ThreadWeaver: Adaptive Threading for Efficient Parallel Reasoning in Language Models

Scaling inference-time computation has enabled Large Language Models (LLMs) to achieve strong reasoning performance, but inherently sequential decoding leads to substantial latency, especially on complex tasks. Recent work on adaptive parallel reasoning aims to improve inference efficiency by decomposing the problem-solving process into concurrent reasoning threads when beneficial. However, existing methods on realistic tasks are either limited to supervised behavior cloning or exhibit significant accuracy drops compared to widely-used sequential long chain-of-thought (CoT) baselines. Moreover, many require customized inference engines, complicating deployment. We introduce ThreadWeaver, a framework for adaptive parallel reasoning that achieves accuracy on par with popular sequential reasoning models of comparable size while significantly reducing inference latency. ThreadWeaver's performance stems from three key innovations: 1) a two-stage parallel trajectory generator that produces large-scale, high-quality CoT data with parallel annotations for supervised fine-tuning; 2) a trie-based training-inference co-design that enables parallel reasoning on any off-the-shelf autoregressive inference engine without modifying position embeddings or KV caches; and 3) a parallelization-aware reinforcement learning framework that teaches the model to balance accuracy with effective parallelization. Across six challenging mathematical reasoning benchmarks, ThreadWeaver trained atop Qwen3-8B achieves accuracy comparable to cutting-edge sequential reasoning models (71.9% on average and 79.9% on AIME24) while delivering up to 1.53x average speedup in token latency, establishing a new Pareto frontier between accuracy and efficiency.
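The abstract's second innovation, the trie-based training-inference co-design, hinges on a simple structural idea: concurrent reasoning threads share a common prefix, so they can be stored as branches of a trie and each served to a stock autoregressive engine as an ordinary flat token sequence. The sketch below is purely illustrative (the class and function names are not from the paper) and only demonstrates how trie-stored threads flatten back into plain sequences that need no position-embedding or KV-cache changes:

```python
# Illustrative sketch (not the authors' implementation): parallel reasoning
# threads stored as a trie sharing a common prefix. Flattening each root-to-leaf
# path yields an ordinary token sequence that any off-the-shelf autoregressive
# engine can decode unmodified.

class TrieNode:
    def __init__(self, tokens):
        self.tokens = tokens      # token span owned by this node
        self.children = []        # divergent reasoning threads below it

def flatten_threads(root):
    """Yield one flat token sequence per leaf (i.e., per reasoning thread)."""
    def walk(node, prefix):
        path = prefix + node.tokens
        if not node.children:     # leaf: a complete thread
            yield path
        for child in node.children:
            yield from walk(child, path)
    yield from walk(root, [])

# Shared problem statement, then two concurrent reasoning threads.
root = TrieNode(["Problem:", "solve", "x+1=3."])
root.children = [TrieNode(["Thread", "A:", "x=2"]),
                 TrieNode(["Thread", "B:", "check", "x=2"])]

for seq in flatten_threads(root):
    print(seq)
```

Because each flattened sequence is self-contained, the shared-prefix bookkeeping lives entirely in the data layout rather than in engine internals, which is consistent with the abstract's claim that no inference-engine modification is required.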

Top-level tags: llm, model training, model evaluation
Detailed tags: parallel reasoning, inference efficiency, chain-of-thought, reinforcement learning, mathematical reasoning

ThreadWeaver: Adaptive Threading for Efficient Parallel Reasoning in Language Models


1️⃣ One-sentence summary

This paper proposes ThreadWeaver, a method that lets large language models reason in parallel on complex problems, much like multithreaded task execution, matching the accuracy of leading sequential reasoning models while substantially reducing inference latency, without requiring any modification to existing inference engines.


Source: arXiv:2512.07843