DEEPSYNTH: A Benchmark for Deep Information Synthesis
1️⃣ One-sentence summary
This paper introduces a new benchmark called DEEPSYNTH for evaluating AI agents on realistic tasks that require gathering and synthesizing information from multiple sources and performing complex reasoning; the results show that even today's most capable models still perform poorly on such tasks.
Large language model (LLM)-based agents are increasingly used to solve complex tasks involving tool use, such as web browsing, code execution, and data analysis. However, current evaluation benchmarks do not adequately assess their ability to solve real-world tasks that require synthesizing information from multiple sources and inferring insights beyond simple fact retrieval. To address this, we introduce DEEPSYNTH, a novel benchmark designed to evaluate agents on realistic, time-consuming problems that combine information gathering, synthesis, and structured reasoning to produce insights. DEEPSYNTH contains 120 tasks collected across 7 domains, with data sources covering 67 countries. DEEPSYNTH is constructed using a multi-stage data collection pipeline in which annotators collect official data sources, form hypotheses, perform manual analysis, and design tasks with verifiable answers. When evaluated on DEEPSYNTH, 11 state-of-the-art LLMs and deep research agents achieve a maximum F1 score of 8.97 and a maximum LLM-judge score of 17.5, underscoring the difficulty of the benchmark. Our analysis reveals that current agents struggle with hallucinations and with reasoning over large information spaces, highlighting DEEPSYNTH as a crucial benchmark for guiding future research.
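The abstract reports results as an F1 score against verifiable answers. The paper's exact scoring procedure is not specified here; below is a minimal sketch, assuming the common token-overlap F1 used in question-answering evaluation (the function name `token_f1` and the whitespace tokenization are illustrative assumptions, not the paper's implementation).

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a predicted answer and a reference answer.

    Assumes simple lowercase whitespace tokenization; real benchmarks
    often add normalization (punctuation stripping, article removal).
    """
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        # Both empty counts as a match; otherwise no credit.
        return float(pred_tokens == ref_tokens)
    # Count shared tokens, respecting multiplicity.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

Under this metric a perfect answer scores 1.0 (i.e., 100 on a 0-100 scale), which puts the reported maximum of 8.97 in context: top agents recover only a small fraction of the reference answers' content.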
Source: arXiv: 2602.21143