arXiv submission date: 2026-01-09
📄 Abstract - IIB-LPO: Latent Policy Optimization via Iterative Information Bottleneck

Recent advances in Reinforcement Learning with Verifiable Rewards (RLVR) for Large Language Model (LLM) reasoning have been hindered by a persistent challenge: exploration collapse. The semantic homogeneity of random rollouts often traps models in narrow, over-optimized behaviors. While existing methods leverage policy entropy to encourage exploration, they face inherent limitations. Global entropy regularization is susceptible to reward hacking, which can induce meaningless verbosity, whereas local token-selective updates struggle with the strong inductive bias of pre-trained models. To address this, we propose Latent Policy Optimization via Iterative Information Bottleneck (IIB-LPO), a novel approach that shifts exploration from statistical perturbation of token distributions to topological branching of reasoning trajectories. IIB-LPO triggers latent branching at high-entropy states to diversify reasoning paths and employs the Information Bottleneck principle both as a trajectory filter and a self-reward mechanism, ensuring concise and informative exploration. Empirical results across four mathematical reasoning benchmarks demonstrate that IIB-LPO achieves state-of-the-art performance, surpassing prior methods by margins of up to 5.3% in accuracy and 7.4% in diversity metrics.
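To make the branching idea concrete, below is a minimal sketch of entropy-triggered branching during token decoding. It assumes a Hugging Face-style causal LM and uses hypothetical names and thresholds (`token_entropy`, `entropy_threshold`, `num_branches`); the paper's method branches in latent space and applies an information-bottleneck filter on top, neither of which is reproduced here.

```python
# Minimal sketch: fork extra rollouts when the next-token distribution is high-entropy.
# Assumes a Hugging Face-style causal LM whose forward pass returns .logits.
# All names and thresholds are hypothetical illustrations, not the authors' code.

import torch
import torch.nn.functional as F


def token_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Shannon entropy (in nats) of the next-token distribution."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(log_probs.exp() * log_probs).sum(dim=-1)


@torch.no_grad()
def entropy_branching_rollout(model, prefix_ids, max_new_tokens=128,
                              entropy_threshold=2.0, num_branches=4,
                              max_branches=32):
    """Decode greedily, but fork into several sampled continuations whenever
    the predictive entropy at the current step exceeds `entropy_threshold`.
    (EOS handling omitted for brevity.)"""
    trajectories = [prefix_ids]  # each item: tensor of shape (1, seq_len)
    for _ in range(max_new_tokens):
        next_round = []
        for ids in trajectories:
            logits = model(ids).logits[:, -1, :]  # (1, vocab)
            high_entropy = token_entropy(logits).item() > entropy_threshold
            if high_entropy and len(trajectories) + len(next_round) < max_branches:
                # High-entropy state: branch into several sampled next tokens.
                probs = F.softmax(logits, dim=-1)
                sampled = torch.multinomial(probs, num_branches, replacement=True)
                for tok in sampled[0]:
                    next_round.append(torch.cat([ids, tok.view(1, 1)], dim=1))
            else:
                # Low-entropy state: continue greedily along the single path.
                tok = logits.argmax(dim=-1, keepdim=True)  # (1, 1)
                next_round.append(torch.cat([ids, tok], dim=1))
        trajectories = next_round
    return trajectories
```

In this toy version, branching multiplies token-level samples; in the paper's framing, the fork happens at the level of latent reasoning states, and the resulting trajectories are then filtered and rewarded via the information bottleneck described above.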

Top-level tags: llm, reinforcement learning, model training
Detailed tags: structured exploration, information bottleneck, latent policy optimization, reasoning diversity, rlvr

Structured Exploration via Information-Bottleneck Latent Policy Optimization: Addressing Exploration Collapse in LLM Reasoning / IIB-LPO: Latent Policy Optimization via Iterative Information Bottleneck


1️⃣ One-sentence summary

This paper proposes IIB-LPO (also written I²B-LPO), a new method that uses entropy-driven latent branching and information-bottleneck regularization to address the exploration-collapse problem large language models face in Reinforcement Learning with Verifiable Rewards (RLVR) reasoning tasks, substantially increasing the semantic diversity of reasoning paths while preserving reasoning accuracy.


2️⃣ Key contributions

1. Paradigm shift: from statistical perturbation of token distributions to topological branching of reasoning trajectories

2. Entropy-driven latent branching

3. Dual-purpose information bottleneck, used as both a trajectory filter and a self-reward (see the sketch after this list)

4. Two-stage IIB-LPO framework

5. Structured latent injection (PSA)
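
The dual-purpose information bottleneck (item 3) can be read against the classical IB objective, which trades compression of a representation against its informativeness about the target. The trajectory-level instantiation IIB-LPO uses as a filter and self-reward is not spelled out in this summary, so the form below is the standard formulation rather than the paper's exact loss:

```latex
% Classical Information Bottleneck objective (Tishby et al.).
% X: input (e.g., a reasoning trajectory), Z: compressed/latent representation,
% Y: target (e.g., the verified answer), \beta: compression-informativeness trade-off.
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y)
```

A trajectory scored under such an objective is rewarded for retaining answer-relevant information while discarding redundant tokens, matching the abstract's description of "concise and informative exploration."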


3️⃣ Main results and value

Result highlights

Across four mathematical reasoning benchmarks, IIB-LPO achieves state-of-the-art performance, surpassing prior methods by up to 5.3% in accuracy and 7.4% in diversity metrics (per the abstract).

Practical value


4️⃣ Glossary

Source: arXiv: 2601.05870