arXiv submission date: 2026-03-16
📄 Abstract - OpenSeeker: Democratizing Frontier Search Agents by Fully Open-Sourcing Training Data

Deep search capabilities have become an indispensable competency for frontier Large Language Model (LLM) agents, yet the development of high-performance search agents remains dominated by industrial giants due to a lack of transparent, high-quality training data. This persistent data scarcity has fundamentally hindered the progress of the broader research community in developing and innovating within this domain. To bridge this gap, we introduce OpenSeeker, the first fully open-source search agent (i.e., model and data) that achieves frontier-level performance through two core technical innovations: (1) Fact-grounded scalable controllable QA synthesis, which reverse-engineers the web graph via topological expansion and entity obfuscation to generate complex, multi-hop reasoning tasks with controllable coverage and complexity. (2) Denoised trajectory synthesis, which employs a retrospective summarization mechanism to denoise the trajectory, thereby guiding the teacher LLMs to generate high-quality actions. Experimental results demonstrate that OpenSeeker, trained in a single run on only 11.7k synthesized samples, achieves state-of-the-art performance across multiple benchmarks including BrowseComp, BrowseComp-ZH, xbench-DeepSearch, and WideSearch. Notably, trained with simple SFT, OpenSeeker significantly outperforms the second-best fully open-source agent DeepDive (e.g., 29.5% vs. 15.3% on BrowseComp), and even surpasses industrial competitors such as Tongyi DeepResearch (trained via extensive continual pre-training, SFT, and RL) on BrowseComp-ZH (48.4% vs. 46.7%). We fully open-source the complete training dataset and the model weights to democratize frontier search agent research and foster a more transparent, collaborative ecosystem.
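The abstract's first innovation, fact-grounded QA synthesis via topological expansion and entity obfuscation, can be illustrated with a toy sketch. This is NOT the paper's implementation (which operates over the real web graph); the graph, clue templates, and function names below are all hypothetical, chosen only to show the idea: walk a multi-hop chain of facts outward from a seed entity, then replace the seed's name with a descriptive clue so the question requires reasoning rather than lookup.

```python
# Illustrative sketch (assumed design, not the paper's method) of
# multi-hop QA synthesis: expand a path through a toy entity graph,
# then obfuscate the seed entity behind a descriptive clue.

# Toy knowledge graph: entity -> list of (relation, neighbor) facts.
GRAPH = {
    "Marie Curie": [("born_in", "Warsaw")],
    "Warsaw": [("capital_of", "Poland")],
    "Poland": [("continent", "Europe")],
}

# Obfuscation table: replace an entity name with a descriptive clue.
CLUES = {
    "Marie Curie": "the scientist who pioneered research on radioactivity",
}

def expand_path(start, hops):
    """Topological expansion: walk `hops` edges outward from `start`."""
    path, node = [], start
    for _ in range(hops):
        if node not in GRAPH:
            break
        relation, neighbor = GRAPH[node][0]  # deterministic pick for the demo
        path.append((node, relation, neighbor))
        node = neighbor
    return path

def synthesize_qa(start, hops):
    """Compose a multi-hop question; the path's final entity is the answer."""
    path = expand_path(start, hops)
    clue = CLUES.get(start, start)  # entity obfuscation on the seed
    relations = " -> ".join(rel for _, rel, _ in path)
    question = (f"Starting from {clue} and following the chain "
                f"[{relations}], which entity do you reach?")
    answer = path[-1][2]
    return question, answer

question, answer = synthesize_qa("Marie Curie", hops=3)
print(question)
print(answer)  # Europe
```

Path length (`hops`) controls task complexity and the choice of seed entities controls coverage, which is presumably what the abstract means by "controllable coverage and complexity"; the real system would draw facts from crawled web pages rather than a hand-built dictionary.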

Top-level tags: llm agents data
Detailed tags: search agents training data synthesis multi-hop reasoning open-source benchmark evaluation

OpenSeeker: Democratizing Frontier Search Agents by Fully Open-Sourcing Training Data


1️⃣ One-sentence summary

This paper introduces OpenSeeker, a fully open-source search agent that, through novel data-synthesis methods, reaches frontier-level search performance with only a small amount of training data, aiming to break the dominance of large companies and to promote openness and collaboration in research on this field.

Source: arXiv:2603.15594