RARE: Redundancy-Aware Retrieval Evaluation Framework for High-Similarity Corpora
1️⃣ One-sentence summary
This paper proposes the RARE evaluation framework, which decomposes documents into atomic facts to precisely track information redundancy and improves LLM-based data generation, addressing the failure of existing QA benchmarks to faithfully evaluate retriever performance on highly repetitive corpora such as finance and law.
Existing QA benchmarks typically assume distinct documents with minimal overlap, yet real-world retrieval-augmented generation (RAG) systems operate on corpora such as financial reports, legal codes, and patents, where information is highly redundant and documents exhibit strong inter-document similarity. This mismatch undermines evaluation validity: retrievers can be unfairly undervalued even when they retrieve documents that provide sufficient evidence, because redundancy across documents is not accounted for in evaluation. Conversely, retrievers that perform well on standard benchmarks often generalize poorly to real-world corpora with highly similar and redundant documents. We present RARE (Redundancy-Aware Retrieval Evaluation), a framework for constructing realistic benchmarks by (i) decomposing documents into atomic facts to enable precise redundancy tracking and (ii) enhancing LLM-based data generation with CRRF. RAG benchmark data usually must satisfy multiple quality criteria simultaneously, but LLMs often yield trivial outputs. CRRF scores each criterion separately and fuses the decisions by rank, improving the reliability of generated data. Applying RARE to Finance, Legal, and Patent corpora, we introduce RedQA, on which a strong retriever baseline drops from 66.4% PerfRecall@10 on 4-hop General-Wiki to 5.0-27.9% at the same 4-hop depth, revealing robustness gaps that current benchmarks fail to capture. RARE enables practitioners to build domain-specific RAG evaluations that faithfully reflect real-world deployment conditions.
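The redundancy-aware evaluation idea can be illustrated with a small sketch: a query counts as perfectly served when the atomic facts carried by the top-k retrieved documents jointly cover every gold fact, regardless of which specific documents supplied them. All names here (`doc_facts`, `perf_recall_at_k`) are illustrative assumptions, not the paper's actual implementation.

```python
from typing import Dict, List, Set, Tuple

# Hypothetical sketch of redundancy-aware retrieval evaluation: because many
# documents may redundantly state the same atomic fact, coverage is measured
# over facts, not over specific "gold" document IDs.

def covers(retrieved: List[str], doc_facts: Dict[str, Set[str]],
           gold_facts: Set[str], k: int) -> bool:
    """True if the top-k retrieved documents jointly cover all gold facts."""
    covered: Set[str] = set()
    for doc_id in retrieved[:k]:
        covered |= doc_facts.get(doc_id, set())
    return gold_facts <= covered

def perf_recall_at_k(runs: List[Tuple[List[str], Set[str]]],
                     doc_facts: Dict[str, Set[str]], k: int = 10) -> float:
    """Fraction of queries whose top-k retrieval covers all gold facts."""
    hits = sum(covers(ranked, doc_facts, gold, k) for ranked, gold in runs)
    return hits / len(runs)

# Documents d1 and d2 redundantly state fact f1; either suffices as evidence.
doc_facts = {"d1": {"f1", "f2"}, "d2": {"f1"}, "d3": {"f3"}}
runs = [
    (["d2", "d1"], {"f1", "f2"}),  # facts covered jointly -> hit
    (["d3"], {"f1", "f2"}),        # f1 and f2 missing     -> miss
]
print(perf_recall_at_k(runs, doc_facts, k=10))  # 0.5
```

A conventional document-ID match would penalize the first query for retrieving d2 instead of a designated gold document, even though d2 supplies the same fact; scoring over atomic facts removes that penalty.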
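The criteria-wise fusion step can also be sketched. The exact CRRF formulation is not given in this summary, so the following assumes a standard reciprocal-rank fusion over per-criterion rankings: each candidate QA item is scored on each quality criterion separately, ranked per criterion, and the per-criterion ranks are fused so that no single criterion's score scale dominates.

```python
from typing import Dict, List

# Illustrative rank-based fusion over multiple quality criteria, in the
# spirit of CRRF (assumed reciprocal-rank fusion; not the paper's exact
# method). Fusing ranks rather than raw scores keeps criteria with
# different score scales comparable.

def fuse_by_rank(scores_per_criterion: List[Dict[str, float]],
                 k: float = 60.0) -> List[str]:
    """Each dict maps candidate_id -> score under one criterion.
    Returns candidates sorted by fused reciprocal-rank score, best first."""
    fused: Dict[str, float] = {}
    for scores in scores_per_criterion:
        ranked = sorted(scores, key=scores.get, reverse=True)
        for rank, cand in enumerate(ranked, start=1):
            fused[cand] = fused.get(cand, 0.0) + 1.0 / (k + rank)
    return sorted(fused, key=fused.get, reverse=True)

# Candidate "b" ranks highly on both criteria, so it wins the fusion even
# though "a" and "c" each top one criterion while faring poorly on another.
faithfulness = {"a": 0.9, "b": 0.8, "c": 0.1}
difficulty   = {"a": 0.2, "b": 0.9, "c": 0.8}
print(fuse_by_rank([faithfulness, difficulty]))  # ['b', 'a', 'c']
```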
Source: arXiv: 2604.19047