📄 Abstract - The Sequential Edge: Inverse-Entropy Voting Beats Parallel Self-Consistency at Matched Compute

We revisit test-time scaling for language model reasoning and ask a fundamental question: at an equal token budget and compute, is it better to run multiple independent chains in parallel, or to run fewer chains that iteratively refine through sequential steps? Through a comprehensive evaluation across 5 state-of-the-art open-source models and 3 challenging reasoning benchmarks, we find that sequential scaling, where chains explicitly build upon previous attempts, consistently outperforms the dominant parallel self-consistency paradigm in 95.6% of configurations, with accuracy gains of up to 46.7%. We further introduce inverse-entropy weighted voting, a novel training-free method that boosts the accuracy of sequential scaling. By weighting answers in proportion to the inverse entropy of their reasoning chains, we increase the success rate over parallel majority voting and establish this approach as the optimal test-time scaling strategy. Our findings fundamentally challenge the parallel reasoning orthodoxy that has dominated test-time scaling since Wang et al.'s self-consistency decoding (Wang et al., 2022), positioning sequential refinement as the robust default for modern LLM reasoning and necessitating a paradigm shift in how we approach inference-time optimization.
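
The voting rule described above can be illustrated with a minimal sketch, assuming each reasoning chain exposes its per-token entropies; the `chains` data layout and the chain-level averaging of entropy are illustrative assumptions, not the paper's exact formulation.

```python
from collections import defaultdict

def inverse_entropy_vote(chains):
    """Pick the answer whose supporting chains have the lowest entropy.

    `chains` is a list of (answer, token_entropies) pairs, where
    token_entropies holds the per-token entropy of the chain's decoding
    distribution. Illustrative sketch, not the authors' code.
    """
    scores = defaultdict(float)
    for answer, token_entropies in chains:
        # Mean per-token entropy of the chain (assumption: entropy is
        # aggregated at the chain level before weighting).
        chain_entropy = sum(token_entropies) / max(len(token_entropies), 1)
        # Weight the vote by the inverse entropy: more confident
        # (lower-entropy) chains count more toward their answer.
        scores[answer] += 1.0 / (chain_entropy + 1e-8)
    return max(scores, key=scores.get)

# Example: two chains propose "42", one of them with low confidence.
chains = [
    ("42", [0.2, 0.3, 0.1]),
    ("42", [1.5, 1.8, 1.2]),
    ("17", [0.4, 0.5, 0.3]),
]
print(inverse_entropy_vote(chains))  # -> "42"
```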

Top-level tags: llm, model evaluation, theory
Detailed tags: reasoning, test-time scaling, sequential refinement, voting methods, inference optimization

📄 Paper Summary

The Sequential Edge: Inverse-Entropy Voting Beats Parallel Self-Consistency at Matched Compute


1️⃣ One-Sentence Summary

This study finds that, under the same compute budget, having a language model sequentially and iteratively refine its answers is more effective than running multiple independent reasoning chains in parallel, and that a new inverse-entropy weighted voting technique further improves accuracy.
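
To make the compute-matched comparison concrete, here is a hedged sketch of the two scaling strategies, assuming a hypothetical `generate(prompt)` callable that returns a final answer string; the refinement prompt format is an assumption, not the paper's template.

```python
def parallel_self_consistency(generate, question, n_chains):
    """Parallel scaling: sample n independent chains, then majority-vote."""
    answers = [generate(question) for _ in range(n_chains)]
    return max(set(answers), key=answers.count)

def sequential_refinement(generate, question, n_steps):
    """Sequential scaling: each step sees and revises the previous attempt."""
    attempt = generate(question)
    for _ in range(n_steps - 1):
        # Illustrative refinement prompt: the next chain explicitly builds
        # on the previous answer instead of starting from scratch.
        prompt = (f"{question}\n\nPrevious attempt: {attempt}\n"
                  "Review the attempt above and give an improved final answer.")
        attempt = generate(prompt)
    return attempt
```

Under a matched token budget, the comparison equalizes the total number of generated tokens across the two strategies; the paper's claim is that the sequential variant wins in 95.6% of configurations.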

