arXiv submission date: 2026-05-14
📄 Abstract - Does RAG Know When Retrieval Is Wrong? Diagnosing Context Compliance under Knowledge Conflict

The context-compliance regime in Retrieval-Augmented Generation (RAG) occurs when retrieved context dominates the final answer even when it conflicts with the model's parametric knowledge. Accuracy alone does not reveal how retrieved context causally shapes answers under such conflict. We introduce Context-Driven Decomposition (CDD), a belief-decomposition probe that operates at inference time and serves as an intervention mechanism for controlled retrieval conflict. Across Epi-Scale stress tests, TruthfulQA misconception injection, and cross-model reruns, CDD exposes three patterns. P1: context compliance is measurable in an upper-bound adversarial setting, where standard RAG reaches 15.0% accuracy on TruthfulQA misconception injection (N=500). P2: adversarial accuracy gains transfer across model families: CDD improves accuracy on Gemini-2.5-Flash and on Claude Haiku/Sonnet/Opus, but rationale-answer causal coupling does not transfer. CDD reaches 64.1% mistake-injection causal sensitivity on Gemini-2.5-Flash, while sensitivities for all three Claude variants fall in the [-3%, +7%] range, suggesting that the Claude-side accuracy gains operate through a mechanism distinct from the explicit conflict-resolution trace. P3: explicit conflict decomposition improves robustness under temporal drift and noisy distractors, with CDD reaching 71.3% on temporal shifts and 69.9% on distractor evidence on the full Epi-Scale adversarial benchmark. These three patterns identify context compliance as a structural axis along which standard RAG can be probed and intervened on, distinct from retrieval-quality or single-method robustness questions, and motivate releasing Epi-Scale for systematic study across model families and retrieval pipelines.
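The abstract reports "mistake-injection causal sensitivity" but does not give its formula. A minimal sketch, assuming the metric is simply the fraction of final answers that flip when a mistake is injected into the model's explicit rationale (the function name and toy data are illustrative, not from the paper):

```python
def causal_sensitivity(clean_answers, perturbed_answers):
    """Fraction of items whose final answer changes after the rationale
    is mistake-injected. High values mean the answer is causally coupled
    to the explicit rationale; values near zero mean it is not."""
    if len(clean_answers) != len(perturbed_answers):
        raise ValueError("answer lists must align item by item")
    flips = sum(a != b for a, b in zip(clean_answers, perturbed_answers))
    return flips / len(clean_answers)

# Toy run: 2 of 4 answers flip under injection.
clean = ["A", "B", "C", "D"]
perturbed = ["A", "X", "C", "Y"]
print(causal_sensitivity(clean, perturbed))  # 0.5
```

Under this reading, the [-3%, +7%] Claude range would mean injected rationale mistakes barely move the final answers, which is what the abstract calls a decoupled conflict-resolution trace.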

Top-level tags: llm systems
Detailed tags: retrieval-augmented generation, knowledge conflict, context compliance, belief decomposition, adversarial stress test

Does RAG Know When Retrieval Is Wrong? Diagnosing Context Compliance under Knowledge Conflict


1️⃣ One-sentence summary

This paper proposes an inference-time probing method called Context-Driven Decomposition that reveals whether a retrieval-augmented generation model blindly complies with incorrect context when retrieval conflicts with its own knowledge, and shows experimentally that the method significantly improves accuracy and robustness on adversarial tests.
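The summary above describes a probe that separates what the model itself believes from what the retrieved context asserts. A minimal sketch of that idea, assuming the probe compares a context-free (parametric) answer with a context-conditioned one; the prompt wording and `stub_model` are illustrative assumptions, not the paper's actual CDD prompts:

```python
def probe_context_compliance(model, question, context):
    """Compare the model's parametric answer with its context-conditioned
    answer. `model` is any callable mapping a prompt string to an answer
    string; a mismatch flags a knowledge conflict worth decomposing."""
    parametric = model(f"Answer from your own knowledge only: {question}")
    contextual = model(f"Context: {context}\nAnswer using the context: {question}")
    return {
        "parametric": parametric,
        "contextual": contextual,
        "conflict": parametric.strip() != contextual.strip(),
    }

# Illustrative stand-in for an LLM: answers "Paris" unless misled by context.
def stub_model(prompt):
    return "Lyon" if "Context:" in prompt else "Paris"

result = probe_context_compliance(
    stub_model,
    "What is the capital of France?",
    "The capital of France is Lyon.",
)
print(result["conflict"])  # True
```

A flagged conflict is where the paper's decomposition would intervene, rather than letting the wrong context silently win.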

Source: arXiv 2605.14473