arXiv submission date: 2026-02-02
📄 Abstract - Data Distribution Matters: A Data-Centric Perspective on Context Compression for Large Language Model

The deployment of Large Language Models (LLMs) in long-context scenarios is hindered by computational inefficiency and significant information redundancy. Although recent advances have widely adopted context compression to address these challenges, existing research focuses only on model-side improvements, and the impact of the data distribution itself on context compression remains largely unexplored. To bridge this gap, we are the first to adopt a data-centric perspective to systematically investigate how data distribution affects compression quality along two dimensions: input data and intrinsic data (i.e., the model's internal pretrained knowledge). We evaluate the semantic integrity of compressed representations using an autoencoder-based framework. Our experimental results reveal that: (1) encoder-measured input entropy correlates negatively with compression quality, while decoder-measured entropy shows no significant relationship under a frozen-decoder setting; and (2) the gap between the intrinsic data of the encoder and the decoder significantly diminishes compression gains and is hard to mitigate. Based on these findings, we further present practical guidelines for optimizing compression gains.
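To make the notion of "encoder-measured input entropy" concrete, here is a minimal sketch (not the paper's code) that scores a text by its mean per-token negative log-likelihood under a causal language model via Hugging Face `transformers`. The model name `gpt2` and this particular entropy proxy are assumptions for illustration only.

```python
# Minimal sketch: estimate "input entropy" of a text as the mean per-token
# negative log-likelihood under an encoder-side causal LM.
# The model choice (gpt2) and this proxy definition are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def input_entropy(text: str, model_name: str = "gpt2") -> float:
    """Mean per-token negative log-likelihood (in nats) of `text` under `model_name`."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean token-level
        # cross-entropy, which we treat as an entropy proxy for the input.
        out = model(ids, labels=ids)
    return out.loss.item()


if __name__ == "__main__":
    repetitive = "the cat sat on the mat. " * 8
    noisy = "zq7#k pluvr 93-xj qweop lmnty 48chz"
    print("repetitive text:", round(input_entropy(repetitive), 3))
    print("noisy text:", round(input_entropy(noisy), 3))
```

Under finding (1) of the abstract, inputs that score higher on such an encoder-side measure would be expected to compress with lower semantic fidelity.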

Top tags: llm model training data
Detailed tags: context compression data distribution autoencoder semantic integrity entropy

Data Distribution Matters: A Data-Centric Perspective on Context Compression for Large Language Model


1️⃣ One-sentence summary

This paper is the first to study context compression from the perspective of the data itself. It finds that the complexity of the input data and the mismatch between the models' internal knowledge are key factors affecting how well large language models compress long contexts, and it offers practical guidelines for optimizing compression gains based on these findings.

Source: arXiv 2602.01778