arXiv submission date: 2026-03-17
📄 Abstract - HIPO: Instruction Hierarchy via Constrained Reinforcement Learning

Hierarchical Instruction Following (HIF) refers to the problem of prompting large language models with a priority-ordered stack of instructions. Standard methods like RLHF and DPO typically fail at this problem because they optimize for a single objective and do not explicitly enforce system prompt compliance. Meanwhile, supervised fine-tuning relies on mimicking filtered, compliant data, which fails to establish the priority asymmetry at the algorithmic level. In this paper, we introduce HIPO, a novel alignment framework that formulates HIF as a Constrained Markov Decision Process. HIPO elevates system prompts from mere input context to strict algorithmic boundaries. Using a primal-dual safe reinforcement learning approach, the algorithm dynamically enforces system prompt compliance as an explicit constraint, maximizing user utility strictly within this feasible region. Extensive evaluations across diverse model architectures (e.g., Qwen, Phi, Llama) demonstrate that HIPO significantly improves both system compliance and user utility. Furthermore, mechanistic analysis reveals that this constrained optimization autonomously drives the model to shift its attention toward long-range system tokens, providing a principled foundation for reliable LLM deployment in complex workflows.
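The primal-dual approach mentioned in the abstract can be illustrated with a small sketch: the policy (primal) step maximizes a Lagrangian that trades user utility against a compliance-violation cost, while a dual ascent step adjusts the Lagrange multiplier whenever the constraint is violated. The function names, the scalar cost signal, and all numbers below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a primal-dual update for a Constrained MDP.
# All names and numbers are illustrative assumptions.

def primal_dual_step(reward, compliance_cost, threshold, lam, lr_lambda=0.1):
    """One step of a Lagrangian-based constrained optimization.

    reward          : user-utility objective (to be maximized)
    compliance_cost : measure of system-prompt violation (constrain <= threshold)
    lam             : Lagrange multiplier (dual variable), kept non-negative
    """
    # Objective the primal (policy) update would maximize:
    lagrangian = reward - lam * (compliance_cost - threshold)
    # Dual ascent: raise lam when the constraint is violated,
    # relax it back toward 0 once the policy is feasible.
    lam = max(0.0, lam + lr_lambda * (compliance_cost - threshold))
    return lagrangian, lam

lam = 0.0
# Toy trajectory: compliance cost shrinking as training progresses.
for cost in [0.8, 0.6, 0.3, 0.1]:
    _, lam = primal_dual_step(reward=1.0, compliance_cost=cost,
                              threshold=0.2, lam=lam)
```

The key property is that the penalty weight is not hand-tuned: `lam` grows only while the compliance constraint is violated, so system-prompt compliance is enforced adaptively rather than via a fixed reward mix.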

Top-level tags: llm, model training, agents
Detailed tags: instruction following, constrained reinforcement learning, alignment, system prompt compliance, hierarchical control

HIPO: Instruction Hierarchy via Constrained Reinforcement Learning


1️⃣ One-Sentence Summary

This paper proposes a new method called HIPO, which uses constrained reinforcement learning to help large language models better follow a priority-ordered set of complex instructions, ensuring that core system instructions are strictly obeyed while improving responses to user instructions.

Source: arXiv:2603.16152