An Information Theoretic Perspective on Agentic System Design
1️⃣ One-Sentence Summary
This paper proposes an information-theoretic approach to guiding the design of agentic language model systems: by quantifying how efficiently compressor models convey information, it shows that scaling up the compressor is more effective than scaling up the predictor, achieving near-frontier performance at substantially lower cost.
Agentic language model (LM) systems power modern applications like "Deep Research" and "Claude Code," and leverage multi-LM architectures to overcome context limitations. Beneath their apparent diversity lies a recurring pattern: smaller "compressor" LMs (which can even run locally) distill raw context into compact text that is then consumed by larger "predictor" LMs. Despite their popularity, the design of compressor-predictor systems remains largely ad hoc, with little guidance on how compressor and predictor choices shape downstream performance. In practice, attributing gains to compression versus prediction requires costly, task-specific pairwise sweeps. We argue that these agentic system design questions are, at root, information-theoretic. Viewing the compressor LM as a noisy channel, we introduce a simple estimator of mutual information between the context and its compression to quantify compression quality in a task-independent way. We show that mutual information strongly predicts downstream performance, independent of any specific task. Through an information-theoretic framework, we perform a comprehensive empirical analysis across five datasets and three model families. Results reveal that larger compressors are not only more accurate but also more token-efficient, conveying more bits of information per token. A 7B Qwen-2.5 compressor, for instance, is $1.6\times$ more accurate, $4.6\times$ more concise, and conveys $5.5\times$ more bits of mutual information per token than its 1.5B sibling. Across datasets, scaling compressors is substantially more effective than scaling predictors, enabling larger on-device compressors to pair with smaller cloud predictors. Applied to a Deep Research system, these principles enable local compressors as small as 3B parameters to recover $99\%$ of frontier-LM accuracy at $26\%$ of API costs.
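The abstract does not spell out the estimator, but the noisy-channel framing suggests a simple pointwise form: score the compression $z$ with and without the context $x$ under a reference LM and take the log-likelihood ratio $\log p(z \mid x) - \log p(z)$. The sketch below implements that idea with Hugging Face `transformers`; the model choice, function names, and per-token normalization are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch (not the paper's exact estimator) of scoring compression
# quality as pointwise mutual information between a context x and its
# compression z, using a reference LM:
#
#   I(x; z) ≈ log p(z | x) - log p(z)
#
# "Qwen/Qwen2.5-1.5B" is an illustrative choice of scoring model; any
# causal LM with a Hugging Face checkpoint would do.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2.5-1.5B"  # assumption: any causal LM works here
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
lm = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
lm.eval()

@torch.no_grad()
def total_logprob(text: str, prefix: str = "") -> float:
    """Sum of log-probabilities (nats) the LM assigns to `text`, optionally
    conditioned on `prefix`. Tokenizing prefix and text separately is an
    approximation: joint tokenization can differ at the boundary."""
    text_ids = tok(text, return_tensors="pt").input_ids
    if prefix:
        prefix_ids = tok(prefix, return_tensors="pt").input_ids
        ids = torch.cat([prefix_ids, text_ids], dim=1)
        n_prefix = prefix_ids.shape[1]
    else:
        ids, n_prefix = text_ids, 0
    logits = lm(ids).logits
    # Position i predicts token i+1, so shift logits and targets by one.
    logps = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    token_logps = logps[torch.arange(targets.shape[0]), targets]
    # Keep only the scores of `text` tokens (the first text token is
    # unscored when there is no prefix -- a small, standard approximation).
    return token_logps[max(n_prefix - 1, 0):].sum().item()

def mi_bits_per_token(context: str, compression: str) -> float:
    """Pointwise MI estimate in bits, normalized per compression token,
    mirroring the paper's 'bits of information per token' framing."""
    conditional = total_logprob(compression, prefix=context)  # log p(z | x)
    marginal = total_logprob(compression)                     # log p(z)
    n_tokens = len(tok(compression).input_ids)
    return (conditional - marginal) / (n_tokens * math.log(2))
```

Averaging `mi_bits_per_token` over a corpus of (context, compression) pairs would give a task-independent score comparable across compressor sizes, in the spirit of the paper's bits-per-token measurements.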
Source: arXiv: 2512.21720