📄 Abstract - LoopTool: Closing the Data-Training Loop for Robust LLM Tool Calls

Augmenting Large Language Models (LLMs) with external tools enables them to execute complex, multi-step tasks. However, tool learning is hampered by static synthetic data pipelines, in which data generation and model training are executed as two separate, non-interactive processes. This approach fails to adaptively focus on a model's specific weaknesses and allows noisy labels to persist, degrading training efficiency. We introduce LoopTool, a fully automated, model-aware data evolution framework that closes this loop by tightly integrating data synthesis and model training. LoopTool iteratively refines both the data and the model through three synergistic modules: (1) Greedy Capability Probing (GCP) diagnoses the model's mastered and failed capabilities; (2) Judgement-Guided Label Verification (JGLV) uses an open-source judge model to find and correct annotation errors, progressively purifying the dataset; and (3) Error-Driven Data Expansion (EDDE) generates new, challenging samples based on identified failures. This closed-loop process operates within a cost-effective, open-source ecosystem, eliminating dependence on expensive closed-source APIs. Experiments show that our 8B model trained with LoopTool significantly surpasses its 32B data generator and achieves new state-of-the-art results on the BFCL-v3 and ACEBench benchmarks for its scale. Our work demonstrates that closed-loop, self-refining data pipelines can dramatically enhance the tool-use capabilities of LLMs.
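The abstract describes an iterative probe-verify-expand-train cycle. The following minimal Python sketch shows one plausible reading of that loop; the function names and interfaces (probe, verify_label, expand_errors, finetune) are illustrative assumptions supplied by the caller, not the paper's actual API:

```python
from typing import Any, Callable, List, Tuple

Sample = Any  # a tool-call training example; its exact structure is paper-specific


def looptool_loop(
    model: Any,
    dataset: List[Sample],
    probe: Callable[[Any, Sample], bool],                   # GCP: does the model solve s?
    verify_label: Callable[[Sample], Sample],               # JGLV: judge-corrected label
    expand_errors: Callable[[List[Sample]], List[Sample]],  # EDDE: new hard samples
    finetune: Callable[[Any, List[Sample]], Any],
    n_iters: int = 3,
) -> Tuple[Any, List[Sample]]:
    """Hypothetical sketch of LoopTool's closed data-training loop."""
    for _ in range(n_iters):
        # (1) Greedy Capability Probing: diagnose which samples the model fails on.
        failures = [s for s in dataset if not probe(model, s)]

        # (2) Judgement-Guided Label Verification: an open-source judge model
        # flags and corrects noisy annotations, progressively purifying the data.
        dataset = [verify_label(s) for s in dataset]

        # (3) Error-Driven Data Expansion: synthesize new, harder samples
        # targeting the observed failure modes.
        dataset = dataset + expand_errors(failures)

        # Retrain on the refined and expanded data, then loop back to probing.
        model = finetune(model, dataset)
    return model, dataset
```

The key design choice this sketch captures is that each iteration's training output feeds the next iteration's probing, which is what makes the pipeline model-aware rather than a one-shot data synthesis step.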

Top-level tags: llm, agents, model training
Detailed tags: tool learning, data synthesis, closed-loop training, capability probing, error correction

📄 Paper Summary

LoopTool: Closing the Data-Training Loop for Robust LLM Tool Calls


1️⃣ One-Sentence Summary

This paper proposes an automated framework called LoopTool that tightly couples data generation with model training: it continually diagnoses the model's weaknesses, corrects annotation errors, and generates new data targeted at those weaknesses, significantly improving large language models' ability to use external tools.

