arXiv submission date: 2026-02-04
📄 Abstract - Depth-Wise Emergence of Prediction-Centric Geometry in Large Language Models

We show that decoder-only large language models exhibit a depth-wise transition from context-processing to prediction-forming phases of computation accompanied by a reorganization of representational geometry. Using a unified framework combining geometric analysis with mechanistic intervention, we demonstrate that late-layer representations implement a structured geometric code that enables selective causal control over token prediction. Specifically, angular organization of the representation geometry parametrizes prediction distributional similarity, while representation norms encode context-specific information that does not determine prediction. Together, these results provide a mechanistic-geometric account of the dynamics of transforming context into predictions in LLMs.
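The abstract's central quantitative claim is that angles between late-layer representations parametrize how similar the resulting next-token distributions are. Below is a minimal sketch of how one might probe this claim, not the paper's code: the model (`gpt2`), the layer index, the prompts, and the use of Jensen-Shannon distance are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): test whether angular similarity of
# late-layer states tracks similarity of next-token prediction distributions.
# Model, layer index, prompts, and the JS metric are illustrative choices.
import torch
from scipy.spatial.distance import jensenshannon
from scipy.stats import spearmanr
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

prompts = ["The capital of France is", "The capital of Italy is",
           "Two plus two equals", "The chemical symbol for gold is"]

dirs, preds = [], []
with torch.no_grad():
    for p in prompts:
        out = model(**tok(p, return_tensors="pt"))
        h = out.hidden_states[-2][0, -1]       # late-layer state, last token
        dirs.append(h / h.norm())              # keep the direction only
        preds.append(torch.softmax(out.logits[0, -1], dim=-1))

cos, js = [], []
for i in range(len(prompts)):
    for j in range(i + 1, len(prompts)):
        cos.append(torch.dot(dirs[i], dirs[j]).item())   # angular similarity
        js.append(jensenshannon(preds[i].numpy(), preds[j].numpy()))

# If angles parametrize prediction similarity, cosine similarity should
# anti-correlate with JS distance between next-token distributions.
print(spearmanr(cos, js))
```

Under the paper's claim, cosine similarity between late-layer directions should correlate negatively with the JS distance between the corresponding next-token distributions; the norms, discarded above, should carry no such signal.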

Top-level tags: llm, theory, model evaluation
Detailed tags: representational geometry, mechanistic interpretability, prediction formation, transformer dynamics, causal abstraction

Depth-Wise Emergence of Prediction-Centric Geometry in Large Language Models


1️⃣ One-sentence summary

This paper finds that as depth increases, the geometric structure of a large language model's internal representations reorganizes: computation shifts from processing context information to forming predictions, with angular relationships between representations determining prediction similarity, while vector norms encode context details that do not determine the prediction.
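The claimed division of labor between angle and norm suggests a simple intervention-style check. The sketch below is illustrative, not the paper's method: it reads predictions out of an intermediate state with a logit-lens-style map (GPT-2's `ln_f` followed by the tied `lm_head`), rescales the state's norm, and separately rotates its direction. The prompt, layer, and rotation angle are assumptions.

```python
# Minimal sketch (not the paper's intervention method): rescale the norm of a
# late-layer state vs. rotate its direction, reading predictions out with a
# logit-lens-style map (GPT-2's final LayerNorm followed by the tied LM head).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def readout(h):
    """Decode the top next token implied by a residual-stream vector h."""
    with torch.no_grad():
        return tok.decode(model.lm_head(model.transformer.ln_f(h)).argmax().item())

with torch.no_grad():
    out = model(**tok("The capital of France is", return_tensors="pt"))
h = out.hidden_states[-2][0, -1]          # late-layer state, last token

# Norm intervention: rescaling preserves the direction, and LayerNorm removes
# overall scale, so the prediction is unchanged, consistent with norms not
# determining the prediction.
print(readout(h), readout(5.0 * h))

# Angular intervention: a same-norm rotation (here 60 degrees toward a random
# orthogonal direction) changes exactly the quantity the paper ties to the
# prediction, and can flip the predicted token.
r = torch.randn_like(h)
r -= (r @ h) / (h @ h) * h                # orthogonalize r against h
rot = h.norm() * (0.5 * (h / h.norm()) + (3 ** 0.5 / 2) * (r / r.norm()))
print(readout(rot))
```

Because LayerNorm is exactly scale-invariant, the norm rescaling provably leaves this readout unchanged, while the rotation alters only the angular degree of freedom that the paper ties to prediction content.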

Source: arXiv 2602.04931