arXiv submission date: 2026-02-18
📄 Abstract - Balancing Faithfulness and Performance in Reasoning via Multi-Listener Soft Execution

Chain-of-thought (CoT) reasoning sometimes fails to faithfully reflect the true computation of a large language model (LLM), hampering its utility in explaining how LLMs arrive at their answers. Moreover, optimizing for faithfulness and interpretability in reasoning often degrades task performance. To address this tradeoff and improve CoT faithfulness, we propose Reasoning Execution by Multiple Listeners (REMUL), a multi-party reinforcement learning approach. REMUL builds on the hypothesis that reasoning traces which other parties can follow will be more faithful. A speaker model generates a reasoning trace, which is truncated and passed to a pool of listener models who "execute" the trace, continuing the trace to an answer. Speakers are rewarded for producing reasoning that is clear to listeners, with additional correctness regularization via masked supervised finetuning to counter the tradeoff between faithfulness and performance. On multiple reasoning benchmarks (BIG-Bench Extra Hard, MuSR, ZebraLogicBench, and FOLIO), REMUL consistently and substantially improves three measures of faithfulness -- hint attribution, early answering area over the curve (AOC), and mistake injection AOC -- while also improving accuracy. Our analysis finds that these gains are robust across training domains, translate to legibility gains, and are associated with shorter and more direct CoTs.
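
Conceptually, the listener-based reward can be sketched in a few lines of Python. Everything below is a hypothetical illustration: the `Speaker`/`Listener` stubs and their method names are invented, the truncation here simply keeps a fixed prefix fraction, and the paper's actual RL update and masked supervised-finetuning regularization are not reproduced.

```python
import random
from dataclasses import dataclass

# --- Stub models (placeholders for real LLMs; interfaces are assumptions) ---

@dataclass
class Speaker:
    """Stand-in for the speaker LLM that writes a reasoning trace."""
    def generate_trace(self, question: str) -> str:
        return "step 1: parse the puzzle. step 2: eliminate options. answer: B"

@dataclass
class Listener:
    """Stand-in for a listener LLM that 'executes' a truncated trace."""
    noise: float = 0.0
    def continue_trace(self, question: str, trace_prefix: str) -> str:
        # A real listener would continue the prefix to its own answer;
        # this stub returns a fixed answer, occasionally perturbed.
        return "B" if random.random() >= self.noise else "C"

# --- Listener-based reward, as described in the abstract ---

def listener_reward(speaker, listeners, question, gold_answer,
                    truncate_frac=0.5):
    """Reward the speaker by how often listeners, given only a truncated
    trace, continue it to the correct answer."""
    trace = speaker.generate_trace(question)
    words = trace.split()
    prefix = " ".join(words[: max(1, int(len(words) * truncate_frac))])
    answers = [lis.continue_trace(question, prefix) for lis in listeners]
    return sum(a == gold_answer for a in answers) / len(listeners)

if __name__ == "__main__":
    random.seed(0)
    pool = [Listener(noise=0.1) for _ in range(8)]
    r = listener_reward(Speaker(), pool, "Which option is consistent?", "B")
    print(f"speaker reward: {r:.2f}")  # fraction of listeners answering correctly
```

In this toy setup, a noisier or more confused listener pool lowers the reward, so a speaker optimizing it is pushed toward traces whose prefixes already pin down the answer for other parties, which is the intuition behind the faithfulness gains.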

Top-level tags: llm model training theory
Detailed tags: chain-of-thought faithful reasoning reinforcement learning interpretability multi-agent learning

Balancing Faithfulness and Performance in Reasoning via Multi-Listener Soft Execution


1️⃣ One-Sentence Summary

This paper proposes REMUL, a method in which a pool of "listener" models verifies and "executes" the reasoning trace of a "speaker" model. It improves both the interpretability and the faithfulness of the LLM's reasoning while also raising final-answer accuracy, addressing the tradeoff between faithfulness and performance that hampered prior methods.

Source: arXiv:2602.16154