arXiv submission date: 2026-01-13
📄 Abstract - Parallel Context-of-Experts Decoding for Retrieval Augmented Generation

Retrieval Augmented Generation faces a trade-off: concatenating documents into a long prompt enables multi-document reasoning but creates prefill bottlenecks, while encoding document KV caches separately offers speed but breaks cross-document interaction. We propose Parallel Context-of-Experts Decoding (Pced), a training-free framework that shifts evidence aggregation from the attention mechanism to the decoding stage. Pced treats retrieved documents as isolated "experts", synchronizing their predictions via a novel retrieval-aware contrastive decoding rule that weighs expert logits against the model prior. This approach recovers cross-document reasoning capabilities without constructing a shared attention context across documents.
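
To make the decoding-time aggregation concrete, here is a minimal, hypothetical sketch of what one Pced-style decoding step could look like. The function name `pced_decode_step`, the softmax weighting by retrieval scores, and the `alpha` contrast strength are all illustrative assumptions; the paper's actual rule is not reproduced here.

```python
import numpy as np

def pced_decode_step(expert_logits, prior_logits, retrieval_scores, alpha=1.0):
    """One decoding step of a hypothetical Pced-style rule (illustrative sketch only).

    expert_logits    : (K, V) array, next-token logits from K document "experts",
                       each conditioned on a single retrieved document in isolation.
    prior_logits     : (V,) array, logits from the model with no retrieved context.
    retrieval_scores : (K,) array, retrieval relevance scores for the K documents.
    alpha            : strength of the contrast against the prior (assumed knob).
    """
    # Turn retrieval scores into expert weights (assumption: softmax weighting).
    weights = np.exp(retrieval_scores - retrieval_scores.max())
    weights /= weights.sum()

    # Contrast each expert's logits against the context-free prior, then
    # aggregate the contrasted distributions with the retrieval weights.
    contrasted = expert_logits - alpha * prior_logits        # (K, V)
    combined = (weights[:, None] * contrasted).sum(axis=0)   # (V,)

    # Greedy pick of the next token id for this step.
    return int(combined.argmax())


# Tiny demo with random numbers (illustrative only).
rng = np.random.default_rng(0)
K, V = 4, 10  # 4 retrieved documents, vocabulary of 10 tokens
token = pced_decode_step(
    expert_logits=rng.normal(size=(K, V)),
    prior_logits=rng.normal(size=V),
    retrieval_scores=rng.uniform(size=K),
)
print("next token id:", token)
```

Because each expert attends only to its own document, the per-document forward passes can run in parallel and only the logits need to be synchronized at each step, which is where the claimed prefill savings come from.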

Top-level tags: llm, natural language processing, model evaluation
Detailed tags: retrieval augmented generation, decoding algorithm, contrastive decoding, efficiency, multi-document reasoning

Parallel Context-of-Experts Decoding for Retrieval Augmented Generation


1️⃣ One-sentence summary

This paper proposes a training-free method that treats retrieved documents as independent "experts" and synchronizes their predictions at the decoding stage, neatly resolving the tension in retrieval-augmented generation between multi-document reasoning quality and generation speed.

Source: arXiv 2601.08670