arXiv submission date: 2025-12-22
📄 Abstract - Bottom-up Policy Optimization: Your Language Model Policy Secretly Contains Internal Policies

Existing reinforcement learning (RL) approaches treat large language models (LLMs) as a single unified policy, overlooking their internal mechanisms. Understanding how the policy evolves across layers and modules is therefore crucial for enabling more targeted optimization and unraveling complex reasoning mechanisms. In this paper, we decompose the language model policy by leveraging the intrinsic split of the Transformer residual stream and the equivalence between the composition of hidden states with the unembedding matrix and the resulting samplable policy. This decomposition reveals Internal Layer Policies, corresponding to contributions from individual layers, and Internal Modular Policies, which align with the self-attention and feed-forward network (FFN) components within each layer. By analyzing the entropy of these internal policies, we find that: (a) Early layers maintain high entropy for exploration, while top layers converge to near-zero entropy for refinement, with convergence patterns varying across model series. (b) LLaMA's prediction space rapidly converges in the final layer, whereas Qwen-series models, especially Qwen3, exhibit a more human-like, progressively structured reasoning pattern. Motivated by these findings, we propose Bottom-up Policy Optimization (BuPO), a novel RL paradigm that directly optimizes the internal layer policy during early training. By aligning the training objective at lower layers, BuPO reconstructs foundational reasoning capabilities and achieves superior performance. Extensive experiments on complex reasoning benchmarks demonstrate the effectiveness of our method. Our code is available at this https URL.
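The decomposition step described in the abstract, projecting intermediate hidden states through the unembedding matrix to obtain a samplable per-layer distribution and measuring its entropy, can be sketched in a few lines of PyTorch. This is a minimal, logit-lens-style illustration rather than the authors' released code; the model name, the `model.model.norm` attribute, and applying the final norm before unembedding are assumptions that depend on the specific architecture.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical model choice; any causal LM that exposes hidden states works similarly.
model_name = "Qwen/Qwen2.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states is a tuple of (num_layers + 1) tensors, each [batch, seq, hidden]:
# the embedding output followed by the residual stream after every layer.
unembed = model.get_output_embeddings().weight  # [vocab, hidden]
final_norm = model.model.norm                   # final norm; attribute name varies by architecture

for layer_idx, h in enumerate(out.hidden_states):
    # Internal layer policy at the last position: hidden state -> final norm -> unembedding -> softmax.
    last = final_norm(h[:, -1, :])
    probs = torch.softmax(last @ unembed.T, dim=-1)  # samplable distribution over the vocabulary
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(-1)
    print(f"layer {layer_idx:2d}  entropy = {entropy.item():.3f}")
```

Under the paper's framing, the entropies printed per layer would be expected to start high in early layers and approach zero near the top; the exact convergence profile differs across model families.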

Top-level tags: llm, reinforcement learning, model training
Detailed tags: policy decomposition, transformer analysis, internal policies, reasoning mechanisms, layer optimization

Bottom-up Policy Optimization: Your Language Model Policy Secretly Contains Internal Policies


1️⃣ One-sentence summary

This paper finds that the internal policies of different layers and modules in a large language model serve distinct functions, and based on this insight proposes a new training method that improves complex-reasoning ability by directly optimizing the lower-layer internal policies.

Source: arXiv:2512.19673