
📄 Abstract - How Far Are We from Genuinely Useful Deep Research Agents?

Deep Research Agents (DRAs) aim to automatically produce analyst-level reports through iterative information retrieval and synthesis. However, most existing DRAs have been validated on question-answering benchmarks, while the generation of comprehensive reports remains overlooked. Worse, current benchmarks for report synthesis suffer from limited task complexity and subjective metrics; they therefore fail to reflect user demands and limit the practical utility of generated reports. To address these gaps, we present the Fine-grained DEepResearch bench (FINDER), an enhanced benchmark consisting of 100 human-curated research tasks with 419 structured checklist items that standardize report structure, analytical depth, and factual grounding. Based on approximately 1,000 reports produced by mainstream DRAs, we further propose the Deep rEsearch Failure Taxonomy (DEFT), the first failure taxonomy for deep research agents. DEFT covers 14 fine-grained failure modes across reasoning, retrieval, and generation, and is built on grounded theory with human-LLM co-annotation and inter-annotator reliability validation. Our experimental findings reveal that current DRAs struggle not with task comprehension but with evidence integration, verification, and reasoning-resilient planning.
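
As an illustration of what a checklist-driven benchmark like FINDER implies for scoring, here is a minimal sketch of how a task and its checklist items could be represented and scored. The schema and field names are hypothetical, not taken from the paper's released data format.

```python
from dataclasses import dataclass, field

# Hypothetical schema for a FINDER-style task; field names are illustrative,
# not the paper's actual data format.
@dataclass
class ChecklistItem:
    dimension: str   # e.g. "structure", "analytical depth", "factual grounding"
    criterion: str   # what the generated report must satisfy
    satisfied: bool = False

@dataclass
class ResearchTask:
    question: str
    checklist: list[ChecklistItem] = field(default_factory=list)

    def score(self) -> float:
        """Fraction of checklist items the generated report satisfies."""
        if not self.checklist:
            return 0.0
        return sum(item.satisfied for item in self.checklist) / len(self.checklist)

# Example: a task with two checklist items, one satisfied by the report.
task = ResearchTask(
    question="Survey failure modes of deep research agents",
    checklist=[
        ChecklistItem("structure", "Report contains a methodology section", True),
        ChecklistItem("factual grounding", "Every claim cites a retrieved source", False),
    ],
)
print(task.score())  # 0.5
```

The appeal of this style of rubric is that each item is a binary, auditable check, which sidesteps the subjective single-score metrics the paper criticizes.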

Top-level tags: agents, benchmark, model evaluation
Detailed tags: deep research agents, report synthesis, failure taxonomy, evaluation benchmark, retrieval-augmented generation

How Far Are We from Genuinely Useful Deep Research Agents?


1️⃣ One-sentence summary

By pairing a new evaluation standard built on structured checklist items with a failure-mode analysis of reports generated by mainstream research agents, this paper finds that the main bottleneck of current automated research agents lies not in understanding the task, but in integrating evidence, verifying facts, and forming reasoning-resilient plans.
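
A minimal sketch of how DEFT-style failure annotations could be recorded is shown below. The paper names three top-level categories (reasoning, retrieval, generation) and 14 fine-grained modes; the specific mode names in this sketch are hypothetical placeholders.

```python
from enum import Enum

class FailureCategory(Enum):
    # Top-level categories reported in the paper.
    REASONING = "reasoning"
    RETRIEVAL = "retrieval"
    GENERATION = "generation"

# Hypothetical fine-grained modes; the paper defines 14, whose exact
# names are not reproduced here.
FAILURE_MODES = {
    "unverified_claim": FailureCategory.GENERATION,
    "stale_source": FailureCategory.RETRIEVAL,
    "plan_not_revised": FailureCategory.REASONING,
}

def annotate(report_id: str, mode: str) -> tuple[str, FailureCategory]:
    """Map an observed fine-grained failure mode to its top-level category."""
    return report_id, FAILURE_MODES[mode]

print(annotate("report-001", "stale_source"))
# ('report-001', <FailureCategory.RETRIEVAL: 'retrieval'>)
```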


📄 Open the original PDF