Agentic Policy Optimization via Instruction-Policy Co-Evolution
1️⃣ One-Sentence Summary
This paper proposes a new framework called INSPO, which lets the instructions that guide an AI agent's actions co-evolve with the agent's own policy during training. By automatically discovering better instructions, it substantially improves the agent's performance on complex tasks such as multi-turn retrieval and reasoning.
Reinforcement Learning with Verifiable Rewards (RLVR) has advanced the reasoning capability of large language models (LLMs), enabling autonomous agents that can conduct effective multi-turn and tool-integrated reasoning. While instructions serve as the primary protocol for defining agents, RLVR typically relies on static, manually designed instructions. However, these instructions may be suboptimal for the base model, and the optimal instruction may change as the agent's policy improves and explores its interaction with the environment. To bridge this gap, we introduce INSPO, a novel Instruction-Policy co-evolution framework that integrates instruction optimization as a dynamic component of the reinforcement learning (RL) loop. INSPO maintains a dynamic population of instruction candidates that are sampled together with questions; reward signals from the RL loop are automatically attributed to each instruction, and low performers are periodically pruned. New instructions are generated and verified through an on-policy reflection mechanism, in which an LLM-based optimizer analyzes past experience from a replay buffer and evolves more effective strategies given the current policy. We conduct extensive experiments on multi-turn retrieval and reasoning tasks, demonstrating that INSPO substantially outperforms strong baselines that rely on static instructions. INSPO discovers innovative instructions that guide the agent toward more strategic reasoning paths, achieving substantial performance gains with only a marginal increase in computational overhead.
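To make the co-evolution loop described in the abstract more concrete, here is a minimal Python sketch of the overall idea: a population of instruction candidates is sampled alongside questions, rollout rewards are attributed back to the sampled instruction, low performers are periodically pruned, and an LLM-based optimizer proposes new instructions by reflecting on a replay buffer. This is an illustrative sketch, not the paper's implementation; `policy_rollout`, `llm_reflect`, the uniform sampling, and the pruning schedule are all hypothetical stand-ins for INSPO's actual components.

```python
"""Illustrative sketch of instruction-policy co-evolution (not the official INSPO code)."""
import random
from dataclasses import dataclass, field


@dataclass
class InstructionCandidate:
    text: str
    rewards: list = field(default_factory=list)

    @property
    def mean_reward(self) -> float:
        # Average reward attributed to this instruction so far.
        return sum(self.rewards) / len(self.rewards) if self.rewards else 0.0


class InstructionPopulation:
    def __init__(self, seed_instructions, max_size=8):
        self.candidates = [InstructionCandidate(t) for t in seed_instructions]
        self.max_size = max_size

    def sample(self) -> InstructionCandidate:
        # Uniform sampling for simplicity; the paper's scheme may weight candidates differently.
        return random.choice(self.candidates)

    def prune(self, keep_ratio=0.5):
        # Periodically drop the lowest-scoring instructions.
        ranked = sorted(self.candidates, key=lambda c: c.mean_reward, reverse=True)
        self.candidates = ranked[: max(1, int(len(ranked) * keep_ratio))]

    def add(self, text: str):
        if len(self.candidates) < self.max_size:
            self.candidates.append(InstructionCandidate(text))


def policy_rollout(instruction: str, question: str) -> float:
    """Placeholder for a multi-turn agent rollout that returns a verifiable reward."""
    return random.random()


def llm_reflect(replay_buffer) -> str:
    """Placeholder for the LLM-based optimizer that proposes a new instruction
    after analyzing recent (instruction, question, reward) experience."""
    best_instruction, _, _ = max(replay_buffer, key=lambda e: e[2])
    return f"Refined: {best_instruction}"


def train(questions, seed_instructions, steps=100, prune_every=20):
    population = InstructionPopulation(seed_instructions)
    replay_buffer = []
    for step in range(1, steps + 1):
        cand = population.sample()
        question = random.choice(questions)
        reward = policy_rollout(cand.text, question)    # RL rollout (policy update omitted)
        cand.rewards.append(reward)                     # attribute reward to the sampled instruction
        replay_buffer.append((cand.text, question, reward))
        if step % prune_every == 0:
            population.prune()                          # drop low performers
            population.add(llm_reflect(replay_buffer))  # on-policy reflection adds a new candidate
    return population


if __name__ == "__main__":
    pop = train(
        questions=["Who wrote X?", "When did Y happen?"],
        seed_instructions=["Answer step by step.", "Search before answering."],
    )
    for c in pop.candidates:
        print(f"{c.mean_reward:.2f}  {c.text}")
```

In this sketch the policy itself is not updated; in INSPO the instruction population evolves inside the RL loop, so the policy gradient step and the instruction update share the same reward signal.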