📄 Abstract - HiPER: Hierarchical Reinforcement Learning with Explicit Credit Assignment for Large Language Model Agents
Training LLMs as interactive agents for multi-turn decision-making remains challenging, particularly in long-horizon tasks with sparse and delayed rewards, where agents must execute extended sequences of actions before receiving meaningful feedback. Most existing reinforcement learning (RL) approaches model LLM agents as flat policies operating at a single time scale, selecting one action at each turn. In sparse-reward settings, such flat policies must propagate credit across the entire trajectory without explicit temporal abstraction, which often leads to unstable optimization and inefficient credit assignment. We propose HiPER, a novel Hierarchical Plan-Execute RL framework that explicitly separates high-level planning from low-level execution. HiPER factorizes the policy into a high-level planner that proposes subgoals and a low-level executor that carries them out over multiple action steps. To align optimization with this structure, we introduce a key technique called hierarchical advantage estimation (HAE), which carefully assigns credit at both the planning and execution levels. By aggregating returns over the execution of each subgoal and coordinating updates across the two levels, HAE provides an unbiased gradient estimator and provably reduces variance compared to flat generalized advantage estimation. Empirically, HiPER achieves state-of-the-art performance on challenging interactive benchmarks, reaching 97.4% success on ALFWorld and 83.3% on WebShop with Qwen2.5-7B-Instruct (+6.6% and +8.3% over the best prior method), with especially large gains on long-horizon tasks requiring multiple dependent subtasks. These results highlight the importance of explicit hierarchical decomposition for scalable RL training of multi-turn LLM agents.
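To make the HAE idea in the abstract more concrete, below is a minimal, hypothetical sketch of two-level advantage estimation: low-level advantages are computed with standard GAE within each subgoal segment, and the planner is credited by treating each subgoal's aggregated return as a single "macro step". The function names, the `(start, end)` segment format, and the choice of GAE at both levels are illustrative assumptions for this summary, not the paper's actual implementation.

```python
# Hypothetical sketch of hierarchical advantage estimation (HAE):
# executor advantages within each subgoal, planner advantages over subgoals.
from typing import List, Tuple

def gae(rewards: List[float], values: List[float], gamma: float, lam: float) -> List[float]:
    """Standard generalized advantage estimation over one segment.
    `values` must contain one extra bootstrap entry at the end."""
    advantages = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running
        advantages[t] = running
    return advantages

def hierarchical_advantages(
    rewards: List[float],
    low_values: List[float],          # executor critic, length T + 1 (bootstrap)
    high_values: List[float],         # planner critic, one per subgoal + bootstrap
    segments: List[Tuple[int, int]],  # (start, end) step indices of each subgoal
    gamma: float = 0.99,
    lam: float = 0.95,
) -> Tuple[List[float], List[float]]:
    """Return (per-step low-level advantages, per-subgoal high-level advantages)."""
    low_adv: List[float] = []
    seg_returns: List[float] = []
    for (start, end) in segments:
        seg_rewards = rewards[start:end]
        seg_values = low_values[start:end + 1]
        # Credit the executor only within its own subgoal segment.
        low_adv.extend(gae(seg_rewards, seg_values, gamma, lam))
        # Aggregate the segment's discounted return into one "macro" reward.
        seg_returns.append(sum(gamma ** i * r for i, r in enumerate(seg_rewards)))
    # Treat each subgoal as a single step for the planner and reuse GAE on top.
    high_adv = gae(seg_returns, high_values, gamma, lam)
    return low_adv, high_adv
```

In this sketch the executor's advantages never span subgoal boundaries, while the planner sees one aggregated return per subgoal, which is one plausible way to realize the variance reduction relative to flat GAE that the abstract describes.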
HiPER: Hierarchical Reinforcement Learning with Explicit Credit Assignment for Large Language Model Agents
1️⃣ One-Sentence Summary
This paper proposes HiPER, a new hierarchical reinforcement learning framework that explicitly decomposes an agent's decision-making into two levels, high-level planning and low-level execution, and pairs this structure with a novel credit assignment method. This design effectively addresses the training instability and inefficiency that large language models face on complex multi-turn, sparse-reward tasks, and achieves leading performance on multiple interactive benchmarks.