Comprehensive Comparison of RAG Methods Across Multi-Domain Conversational QA
1️⃣ One-sentence summary
This paper systematically compares a range of retrieval-augmented generation (RAG) methods on multi-turn conversational question answering and finds that simple, robust retrieval strategies (such as reranking, hybrid BM25, and HyDE) generally outperform more complex ones; effectiveness hinges on the match between the method and the dataset's characteristics, not on the method's complexity.
Conversational question answering increasingly relies on retrieval-augmented generation (RAG) to ground large language models (LLMs) in external knowledge. Yet most existing studies evaluate RAG methods in isolation and primarily focus on single-turn settings. This paper addresses the lack of a systematic comparison of RAG methods for multi-turn conversational QA, where dialogue history, coreference, and shifting user intent substantially complicate retrieval. We present a comprehensive empirical study of vanilla and advanced RAG methods across eight diverse conversational QA datasets spanning multiple domains. Using a unified experimental setup, we evaluate retrieval quality and answer generation with both retrieval and generator metrics, and analyze how performance evolves across conversation turns. Our results show that robust yet straightforward methods, such as reranking, hybrid BM25, and HyDE, consistently outperform vanilla RAG. In contrast, several advanced techniques fail to yield gains and can even degrade performance below the No-RAG baseline. We further demonstrate that dataset characteristics and dialogue length strongly influence retrieval effectiveness, explaining why no single RAG strategy dominates across settings. Overall, our findings indicate that effective conversational RAG depends less on method complexity than on alignment between the retrieval strategy and the dataset structure. We publish the code used (GitHub repository: this https URL).
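To make the "hybrid BM25" idea from the abstract concrete, below is a minimal, self-contained sketch: a from-scratch BM25 scorer combined with a second ranking via reciprocal rank fusion (RRF), a common way to merge sparse and dense result lists. This is purely illustrative; the paper's actual retrievers, fusion scheme, and hyperparameters are not specified here, and all function names are our own.

```python
import math
from collections import Counter

def bm25_scores(query: str, docs: list[str], k1: float = 1.5, b: float = 0.75) -> list[float]:
    """Score each document against the query with Okapi BM25 (whitespace tokenization)."""
    toks = [d.lower().split() for d in docs]
    q = query.lower().split()
    n = len(docs)
    avgdl = sum(len(t) for t in toks) / n
    # Document frequency of each term across the corpus.
    df = Counter(w for t in toks for w in set(t))
    scores = []
    for t in toks:
        tf = Counter(t)
        s = 0.0
        for w in q:
            if tf[w] == 0:
                continue
            idf = math.log(1 + (n - df[w] + 0.5) / (df[w] + 0.5))
            s += idf * tf[w] * (k1 + 1) / (tf[w] + k1 * (1 - b + b * len(t) / avgdl))
        scores.append(s)
    return scores

def rrf_fuse(rankings: list[list[int]], k: int = 60) -> list[int]:
    """Fuse several ranked lists of doc indices via reciprocal rank fusion."""
    fused: Counter = Counter()
    for ranking in rankings:
        for rank, idx in enumerate(ranking):
            fused[idx] += 1.0 / (k + rank + 1)
    return [idx for idx, _ in fused.most_common()]
```

In a hybrid setup, `rrf_fuse` would take the BM25 ranking together with a dense-retriever ranking (e.g., from embedding similarity); RRF needs only rank positions, so it sidesteps calibrating incomparable score scales between the two retrievers.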
Source: arXiv: 2602.09552