User-Centric Evidence Ranking for Attribution and Fact Verification
1️⃣ One-sentence summary
This paper introduces a new task called "Evidence Ranking," which optimizes how evidence is presented by surfacing the most sufficient information at the top of a ranked list. While keeping all evidence accessible, this significantly reduces users' reading effort during fact checking and improves verification outcomes.
Attribution and fact verification are critical challenges in natural language processing for assessing information reliability. While automated systems and Large Language Models (LLMs) aim to retrieve and select concise evidence to support or refute claims, they often present users with either insufficient or overly redundant information, leading to inefficient and error-prone verification. To address this, we propose Evidence Ranking, a novel task that prioritizes presenting sufficient information as early as possible in a ranked list. This minimizes user reading effort while still making all available evidence accessible for sequential verification. We compare two approaches to the new ranking task: one-shot ranking and incremental ranking. We introduce a new evaluation framework, inspired by information retrieval metrics, and construct a unified benchmark by aggregating existing fact verification datasets. Extensive experiments with diverse models show that incremental ranking strategies better capture complementary evidence and that LLM-based methods outperform shallow baselines, while still facing challenges in balancing sufficiency and redundancy. In a controlled user study, we demonstrate that, compared to evidence selection, evidence ranking both reduces reading effort and improves verification. This work provides a foundational step toward more interpretable, efficient, and user-aligned information verification systems.
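The contrast between the two strategies can be sketched in a few lines. This is a hypothetical illustration, not the paper's method: the evidence items, scoring functions, and fact-coverage sets below are placeholders. One-shot ranking scores each evidence independently, while incremental ranking greedily picks the evidence that adds the most *new* information at each step, which favors complementary evidence early in the list.

```python
# Hypothetical sketch of the two ranking strategies discussed in the abstract.
# Evidence names, relevance scores, and fact sets are illustrative placeholders.

def one_shot_ranking(evidences, relevance):
    """Rank each evidence independently by its own relevance score."""
    return sorted(evidences, key=lambda e: relevance[e], reverse=True)

def incremental_ranking(evidences, facts_covered):
    """Greedily pick the evidence adding the most new facts at each step,
    so complementary evidence surfaces early in the ranked list."""
    remaining, ranked, covered = set(evidences), [], set()
    while remaining:
        best = max(remaining, key=lambda e: len(facts_covered[e] - covered))
        ranked.append(best)
        covered |= facts_covered[best]
        remaining.remove(best)
    return ranked

# Toy claim decomposed into three sub-facts {a, b, c}:
facts = {"e1": {"a", "b"}, "e2": {"a"}, "e3": {"c"}}
rel = {"e1": 0.9, "e2": 0.8, "e3": 0.3}

# One-shot puts redundant e2 above e3, even though e2 adds nothing new;
# incremental ranking promotes the complementary e3 instead.
print(one_shot_ranking(facts, rel))        # ['e1', 'e2', 'e3']
print(incremental_ranking(facts, facts))   # ['e1', 'e3', 'e2']
```

The greedy marginal-gain step mirrors why the abstract finds incremental strategies better at capturing complementary evidence: ranking by standalone relevance cannot penalize redundancy.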
Source: arXiv:2601.21387