arXiv submission date: 2026-03-16
📄 Abstract - Are Dilemmas and Conflicts in LLM Alignment Solvable? A View from Priority Graph

As Large Language Models (LLMs) become more powerful and autonomous, they increasingly face conflicts and dilemmas across many scenarios. We first summarize and taxonomize these diverse conflicts. We then model the LLM's preferences over different choices as a priority graph, where instructions and values are nodes and edges represent context-specific priorities determined by the model's output distribution. This graph reveals that unified, stable LLM alignment is very challenging, because the graph is neither static nor necessarily consistent across contexts. It also exposes a potential vulnerability, priority hacking, in which adversaries craft deceptive contexts to manipulate the graph and bypass safety alignment. To counter this, we propose a runtime verification mechanism that enables LLMs to query external sources to ground their context and resist manipulation. While this approach enhances robustness, we acknowledge that many ethical and value dilemmas are philosophically irreducible, posing a long-term, open challenge for the future of AI alignment.
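The abstract describes the priority graph as a directed relation over instructions and values whose edges depend on context. A minimal sketch of that idea, using hypothetical contexts and value names (none of these identifiers come from the paper, and this is not the authors' actual formalism), shows how merging the priorities induced by two different contexts can produce a cycle, i.e. no single consistent ordering:

```python
from collections import defaultdict

# Hypothetical priority graph: nodes are instructions/values, and an
# edge (a, b) means "a outranks b" in the given context. The contexts
# and value names below are illustrative assumptions.
def priority_graph(context):
    """Return the set of directed priority edges (higher, lower) for a context."""
    if context == "customer_support":
        return {("safety", "helpfulness"), ("helpfulness", "brevity")}
    if context == "emergency":
        # The same pair can flip: brevity now outranks helpfulness.
        return {("safety", "brevity"), ("brevity", "helpfulness")}
    return {("safety", "helpfulness")}

def has_cycle(edges):
    """Detect inconsistency: a directed cycle in the priority relation."""
    adj = defaultdict(list)
    for hi, lo in edges:
        adj[hi].append(lo)
    seen, stack = set(), set()

    def dfs(node):
        seen.add(node)
        stack.add(node)
        for nxt in adj[node]:
            if nxt in stack or (nxt not in seen and dfs(nxt)):
                return True
        stack.discard(node)
        return False

    return any(dfs(n) for n in list(adj) if n not in seen)

# Each context alone is consistent (a strict partial order)...
print(has_cycle(priority_graph("customer_support")))  # False
# ...but merging the two contexts' edges yields helpfulness > brevity
# and brevity > helpfulness simultaneously: a cycle.
merged = priority_graph("customer_support") | priority_graph("emergency")
print(has_cycle(merged))  # True
```

This mirrors the abstract's point that the graph is "neither static nor necessarily consistent in different contexts": each context may induce a well-defined ordering while their union admits none.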

Top-level tags: llm agents theory
Detailed tags: ai alignment priority graph safety runtime verification value conflicts

Are Dilemmas and Conflicts in LLM Alignment Solvable? A View from Priority Graph


1️⃣ One-Sentence Summary

By modeling an LLM's choices among conflicting instructions and values as a dynamic "priority graph", this paper shows that stable, unified alignment is hard to achieve and that the model is vulnerable to "priority hacking" attacks; it proposes a runtime verification defense, while acknowledging that many ethical dilemmas are philosophically irreducible and remain a long-term challenge for AI alignment.

Source: arXiv 2603.15527