arXiv submission date: 2026-01-29
📄 Abstract - Can David Beat Goliath? On Multi-Hop Reasoning with Resource-Constrained Agents

While reinforcement learning (RL) has empowered multi-turn reasoning agents with retrieval and tools, existing successes largely depend on extensive on-policy rollouts in high-cost, high-accuracy regimes. Under realistic resource constraints that cannot support large models or dense explorations, however, small language model agents fall into a low-cost, low-accuracy regime, where limited rollout budgets lead to sparse exploration, sparse credit assignment, and unstable training. In this work, we challenge this trade-off and show that small language models can achieve strong multi-hop reasoning under resource constraints. We introduce DAVID-GRPO, a budget-efficient RL framework that (i) stabilizes early learning with minimal supervision, (ii) assigns retrieval credit based on evidence recall, and (iii) improves exploration by resampling truncated near-miss trajectories. Evaluated on agents up to 1.5B parameters trained on only four RTX 3090 GPUs, DAVID-GRPO consistently outperforms prior RL methods designed for large-scale settings on six multi-hop QA benchmarks. These results show that with the right inductive biases, small agents can achieve low training cost with high accuracy.
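The abstract credits retrieval based on evidence recall, so that rollouts which gather the right supporting passages get a learning signal even when the final answer is wrong. The paper's exact formulation is not given here; the following is a hypothetical sketch under the assumption that the reward blends answer correctness with the fraction of gold evidence retrieved (all names, and the `alpha` weighting, are illustrative, not from the paper):

```python
def evidence_recall(retrieved: set[str], gold_evidence: set[str]) -> float:
    """Fraction of gold supporting passages the agent actually retrieved."""
    if not gold_evidence:
        return 0.0
    return len(retrieved & gold_evidence) / len(gold_evidence)

def trajectory_reward(answer_correct: bool,
                      retrieved: set[str],
                      gold_evidence: set[str],
                      alpha: float = 0.5) -> float:
    """Blend final-answer correctness with retrieval credit so that
    near-miss rollouts (right evidence, wrong answer) are not scored zero."""
    recall = evidence_recall(retrieved, gold_evidence)
    return (1 - alpha) * float(answer_correct) + alpha * recall
```

Under this sketch, a rollout that retrieves one of two gold passages but answers incorrectly still earns a nonzero reward, which is the kind of dense credit signal the abstract argues sparse small-model rollouts need.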

Top-level tags: agents, model training, reinforcement learning
Detailed tags: multi-hop reasoning, resource-constrained agents, retrieval credit assignment, exploration resampling, small language models

Can David Beat Goliath? On Multi-Hop Reasoning with Resource-Constrained Agents


1️⃣ One-sentence summary

This paper proposes DAVID-GRPO, a budget-efficient reinforcement learning framework that stabilizes early learning, refines retrieval credit assignment, and strengthens exploration, enabling small, compute-constrained AI agents to reach high accuracy on complex multi-hop reasoning tasks and breaking the assumed "low cost means low accuracy" trade-off.

Source: arXiv:2601.21699