PAR$^2$-RAG: Planned Active Retrieval and Reasoning for Multi-Hop Question Answering
1️⃣ One-sentence summary
This paper proposes a new framework called PAR$^2$-RAG, which uses a two-stage strategy of broad search followed by deep refinement to address the tendency of large language models to err on multi-hop question answering tasks that require cross-document reasoning, significantly improving both answer accuracy and retrieval quality.
Large language models (LLMs) remain brittle on multi-hop question answering (MHQA), where answering requires combining evidence across documents through retrieval and reasoning. Iterative retrieval systems can fail by locking onto an early low-recall trajectory and amplifying downstream errors, while planning-only approaches may produce static query sets that cannot adapt when intermediate evidence changes. We propose \textbf{Planned Active Retrieval and Reasoning RAG (PAR$^2$-RAG)}, a two-stage framework that separates \emph{coverage} from \emph{commitment}. PAR$^2$-RAG first performs breadth-first anchoring to build a high-recall evidence frontier, then applies depth-first refinement with evidence sufficiency control in an iterative loop. Across four MHQA benchmarks, PAR$^2$-RAG consistently outperforms existing state-of-the-art baselines: compared with IRCoT, it achieves up to \textbf{23.5\%} higher accuracy, with retrieval gains of up to \textbf{10.5\%} in NDCG.
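The two-stage loop described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions: the function names, the toy word-overlap retriever, and the fixed-threshold sufficiency check are all placeholders for the paper's actual components (a real retriever and an LLM-based sufficiency judgment), not the authors' implementation.

```python
def retrieve(query, corpus):
    """Toy stand-in retriever: return passages sharing a word with the query."""
    q_words = set(query.lower().split())
    return [p for p in corpus if q_words & set(p.lower().split())]

def breadth_first_anchor(question, planned_sub_queries, corpus):
    """Stage 1 (coverage): issue the question plus a broad set of planned
    sub-queries up front, building a high-recall evidence frontier before
    committing to any single reasoning path."""
    frontier = []
    for q in [question] + planned_sub_queries:
        for passage in retrieve(q, corpus):
            if passage not in frontier:
                frontier.append(passage)
    return frontier

def depth_first_refine(question, frontier, corpus, max_steps=3):
    """Stage 2 (commitment): iteratively refine the frontier, stopping as
    soon as the evidence is judged sufficient. A passage-count threshold
    stands in here for the paper's evidence-sufficiency control."""
    evidence = list(frontier)
    for _ in range(max_steps):
        if len(evidence) >= 2:  # assumed sufficiency criterion (illustrative)
            break
        # Follow up using the latest evidence to steer the next retrieval.
        follow_up = question + " " + evidence[-1] if evidence else question
        for passage in retrieve(follow_up, corpus):
            if passage not in evidence:
                evidence.append(passage)
    return evidence
```

A small usage example: anchoring on a bridge-style question first gathers passages for both hops, and refinement terminates once the sufficiency check passes rather than continuing to iterate.

```python
corpus = [
    "Alice founded Acme in 1999",
    "Acme is headquartered in Berlin",
    "Unrelated passage about cooking",
]
frontier = breadth_first_anchor(
    "Where is the company Alice founded headquartered",
    ["Acme headquarters"], corpus)
evidence = depth_first_refine(
    "Where is the company Alice founded headquartered", frontier, corpus)
```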
Source: arXiv: 2603.29085