📄 Abstract - SpeContext: Enabling Efficient Long-context Reasoning with Speculative Context Sparsity in LLMs

In this paper, we point out that the objective of retrieval algorithms is to align with the LLM, which is similar to the objective of knowledge distillation in LLMs. We analyze the similarity in information focus between the distilled language model (DLM) and the original LLM from the perspective of information theory, and thus propose a novel paradigm that leverages a DLM as the retrieval algorithm. Based on this insight, we present SpeContext, an algorithm and system co-design for long-context reasoning. (1) At the algorithm level, SpeContext proposes lightweight retrieval heads based on the head-level attention weights of the DLM, achieving >90% parameter reduction by pruning redundancy. (2) At the system level, SpeContext designs an asynchronous prefetch dataflow via an elastic loading strategy, effectively overlapping KV cache retrieval with LLM computation. (3) At the compilation level, SpeContext constructs a theoretical memory model and implements an adaptive memory management system to achieve acceleration by maximizing GPU memory utilization. We deploy and evaluate SpeContext in two resource-constrained environments, cloud and edge. Extensive experiments show that, compared with the Hugging Face framework, SpeContext achieves up to 24.89x throughput improvement in the cloud and 10.06x speedup on the edge with negligible accuracy loss, pushing the Pareto frontier of accuracy and throughput.
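
To make the algorithm-level idea concrete, here is a minimal sketch of speculative context sparsity: a small distilled model's attention weights are aggregated to decide which context tokens' KV cache entries the full LLM actually needs. This is an illustration under assumed names and shapes (`select_context_tokens`, the tensor layout, and the summation-based scoring rule are assumptions), not SpeContext's actual retrieval heads, which are pruned, head-level modules rather than full DLM attention.

```python
import torch

def select_context_tokens(dlm_attn, keep_ratio=0.1):
    """Score context tokens with a distilled language model's (DLM) attention.

    dlm_attn: [num_layers, num_heads, q_len, ctx_len] attention weights from a
              forward pass of the small DLM over the prompt (assumed layout).
    Returns indices of context tokens whose KV cache entries should be
    retrieved for the full LLM.
    """
    # Aggregate the attention mass each context token receives across layers,
    # heads, and query positions -- one possible proxy for "information focus".
    scores = dlm_attn.sum(dim=(0, 1, 2))            # [ctx_len]
    k = max(1, int(keep_ratio * scores.numel()))
    keep = torch.topk(scores, k).indices
    return torch.sort(keep).values                   # preserve original token order

# Toy usage: random weights stand in for a real DLM pass over a 4096-token context.
dlm_attn = torch.rand(4, 8, 1, 4096)                 # 4 layers, 8 heads, 1 query position
kept = select_context_tokens(dlm_attn, keep_ratio=0.05)
print(f"retrieving {kept.numel()} of 4096 KV cache entries")
```

In the paper, this scoring is done by lightweight retrieval heads distilled from the DLM's head-level attention, which is where the >90% parameter reduction comes from.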

Top-level tags: llm, systems, model training
Detailed tags: long-context reasoning, knowledge distillation, retrieval algorithms, kv cache, memory optimization

SpeContext: Enabling Efficient Long-context Reasoning with Speculative Context Sparsity in LLMs


1️⃣ One-sentence summary

This paper proposes a new method called SpeContext, which uses a lightweight "distilled" model to intelligently filter the key information in long contexts and combines this with software-hardware co-optimization, substantially improving the speed and efficiency of long-context processing with almost no impact on the large model's answer accuracy.
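
The system-level part of that co-optimization, overlapping KV cache retrieval with LLM computation, follows the general double-buffering pattern sketched below. This is a generic illustration, not SpeContext's actual elastic loading strategy; `retrieve_kv` and `compute_step` are hypothetical callbacks.

```python
import threading
from queue import Queue

def decode_with_prefetch(steps, retrieve_kv, compute_step):
    """Overlap KV retrieval for step i+1 with LLM compute for step i.

    retrieve_kv(i)       -> KV entries needed at step i (e.g. copied CPU->GPU)
    compute_step(i, kv)  -> model forward for step i using those entries
    """
    prefetched = Queue(maxsize=1)                  # at most one step prefetched ahead

    def worker():
        for i in range(steps):
            prefetched.put((i, retrieve_kv(i)))    # runs ahead of the compute loop

    threading.Thread(target=worker, daemon=True).start()
    outputs = []
    for _ in range(steps):
        i, kv = prefetched.get()                   # already fetched while computing
        outputs.append(compute_step(i, kv))
    return outputs
```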

