arXiv submission date: 2026-01-09
📄 Abstract - Over-Searching in Search-Augmented Large Language Models

Search-augmented large language models (LLMs) excel at knowledge-intensive tasks by integrating external retrieval. However, they often over-search -- unnecessarily invoking search tools even when retrieval does not improve response quality, which leads to computational inefficiency and to hallucinations from incorporating irrelevant context. In this work, we conduct a systematic evaluation of over-searching across multiple dimensions, including query types, model categories, retrieval conditions, and multi-turn conversations. Our findings show: (i) search generally improves answer accuracy on answerable queries but harms abstention on unanswerable ones; (ii) over-searching is more pronounced in complex reasoning models and deep research systems, is exacerbated by noisy retrieval, and compounds across turns in multi-turn conversations; and (iii) the composition of retrieved evidence is crucial, as the presence of negative evidence improves abstention. To quantify over-searching, we introduce Tokens Per Correctness (TPC), an evaluation metric that captures the performance-cost trade-off for search-augmented LLMs. Lastly, we investigate mitigation approaches at both the query and retrieval levels and release OverSearchQA to foster continued research into efficient search-augmented LLMs.
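The abstract names TPC but does not spell out its formula. A plausible reading of "Tokens Per Correctness" is total tokens consumed across an evaluation set divided by the number of correct responses, so lower is better; the sketch below illustrates that reading, with all names and numbers being illustrative assumptions rather than the paper's definition.

```python
def tokens_per_correctness(token_counts, correct_flags):
    """Total tokens spent per correct answer across an eval set.

    Hedged sketch: the exact TPC formula is not given in the abstract;
    this assumes TPC = sum(tokens) / count(correct), lower is better.
    """
    if len(token_counts) != len(correct_flags):
        raise ValueError("one token count per example is required")
    total_tokens = sum(token_counts)
    num_correct = sum(1 for ok in correct_flags if ok)
    if num_correct == 0:
        # No correct answers: cost per correct answer is unbounded.
        return float("inf")
    return total_tokens / num_correct

# Illustrative comparison: a model that searches on every query can spend
# far more tokens for the same number of correct answers, yielding a
# worse (higher) TPC than a model that searches selectively.
always_search = tokens_per_correctness(
    [900, 1100, 1000, 1000], [True, True, True, False])
selective = tokens_per_correctness(
    [300, 1100, 250, 300], [True, True, True, False])
print(always_search, selective)  # 1333.33..., 650.0
```

Under this reading, TPC rewards both accuracy and restraint: skipping an unnecessary search lowers the numerator without changing the denominator.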

Top-level tags: llm model evaluation systems
Detailed tags: retrieval-augmented generation efficiency hallucination benchmark tool usage

Over-Searching in Search-Augmented Large Language Models


1️⃣ One-Sentence Summary

This paper finds that search-augmented large language models suffer from "over-searching": they invoke external search tools unnecessarily, which wastes compute and can lead to wrong answers. The authors systematically evaluate the causes and effects of this problem, and propose a new measurement metric and mitigation approaches.

Source: arXiv 2601.05503