arXiv submission date: 2026-03-23
📄 Abstract - A Comparative Analysis of LLM Memorization at Statistical and Internal Levels: Cross-Model Commonalities and Model-Specific Signatures

Memorization is a fundamental component of intelligence for both humans and LLMs. However, while LLM performance scales rapidly, our understanding of memorization lags behind. Due to limited access to the pre-training data of LLMs, most previous studies focus on a single model series, leading to isolated observations and making it unclear which findings are general and which are model-specific. In this study, we collect multiple model series (Pythia, OpenLLaMa, StarCoder, OLMo 1/2/3) and analyze their shared and unique memorization behaviors at both the statistical and internal levels, connecting individual observations while presenting new findings. At the statistical level, we reveal that the memorization rate scales log-linearly with model size, and that memorized sequences can be further compressed. Further analysis demonstrates a shared frequency and domain distribution pattern for memorized sequences; however, individual models also exhibit distinctive features within these shared patterns. At the internal level, we find that LLMs can remove certain injected perturbations, while memorized sequences are more sensitive to them. By decoding intermediate layers and ablating attention heads, we reveal a general decoding process and shared important heads for memorization. However, the distribution of these important heads differs between families, constituting a unique family-level signature. By bridging diverse experiments and revealing new findings, this study paves the way for a universal and fundamental understanding of memorization in LLMs.

Top-level tags: llm model training model evaluation
Detailed tags: memorization model analysis internal mechanisms scaling laws pre-training

A Comparative Analysis of LLM Memorization at Statistical and Internal Levels: Cross-Model Commonalities and Model-Specific Signatures


1️⃣ One-Sentence Summary

By comparatively analyzing multiple large language model series, this paper reveals universal patterns in memorization behavior at both the statistical level (e.g., the memorization rate scales log-linearly with model size) and the internal-mechanism level (e.g., a shared decoding process and shared important attention heads), while also identifying memorization signatures unique to individual model families.

Source: arXiv 2603.21658