Co-FactChecker: A Framework for Human-AI Collaborative Claim Verification Using Large Reasoning Models
1️⃣ One-Sentence Summary
This paper proposes a new framework called Co-FactChecker, which guides the AI's fact-checking by letting experts directly edit its reasoning process. This effectively combines human domain knowledge with the AI's fast analytical ability, outperforming purely dialogue-based or fully automated approaches.
Professional fact-checkers rely on domain knowledge and deep contextual understanding to verify claims. Large language models (LLMs) and large reasoning models (LRMs) lack such grounding and reason primarily from the available evidence alone, creating a mismatch between expert-led and fully automated claim verification. To mitigate this gap, we posit human-AI collaboration as a more promising path forward, where expert feedback, grounded in real-world knowledge and domain expertise, guides the model's reasoning. However, existing LRMs are hard to calibrate with natural language feedback, particularly in a multi-turn interaction setup. We propose Co-FactChecker, a framework for human-AI collaborative claim verification. We introduce a new interaction paradigm that treats the model's thinking trace as a shared scratchpad. Co-FactChecker translates expert feedback into trace-edits that introduce targeted modifications to the trace, sidestepping the shortcomings of dialogue-based interaction. We provide theoretical results showing that trace-editing offers advantages over multi-turn dialogue, and our automatic evaluations demonstrate that Co-FactChecker outperforms existing autonomous and human-AI collaboration approaches. Human evaluations further show that Co-FactChecker is preferred over multi-turn dialogue, producing higher-quality reasoning and verdicts along with thinking traces that are easier to interpret and more useful.
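The trace-as-shared-scratchpad paradigm can be illustrated with a minimal sketch. All names below (`apply_edit`, `verify_with_feedback`, the `model` callable and its `prefix_trace` parameter) are hypothetical illustrations, not the paper's actual API: the key idea is that an expert's correction is spliced directly into the model's thinking trace, and the model then continues reasoning from the edited prefix, rather than the feedback being appended as a new dialogue turn.

```python
# Minimal sketch of a trace-editing interaction loop, assuming a
# hypothetical `model(claim, evidence, prefix_trace)` callable that
# returns a (thinking_trace, verdict) pair and can resume generation
# from a given trace prefix.

def apply_edit(trace: str, span: tuple[int, int], replacement: str) -> str:
    """Splice an expert's correction into the model's thinking trace,
    replacing the character range `span` with `replacement`."""
    start, end = span
    return trace[:start] + replacement + trace[end:]

def verify_with_feedback(claim, evidence, model, expert_edits):
    # 1. The model produces an initial thinking trace and verdict.
    trace, verdict = model(claim, evidence, prefix_trace="")
    # 2. Each expert correction is applied directly to the trace, and
    #    the model re-reasons from the edited trace -- the targeted
    #    modification replaces faulty reasoning in place instead of
    #    arguing with it across dialogue turns.
    for span, replacement in expert_edits:
        trace = apply_edit(trace, span, replacement)
        trace, verdict = model(claim, evidence, prefix_trace=trace)
    return trace, verdict
```

In a multi-turn dialogue setup, by contrast, the original flawed reasoning stays in context and the model must be persuaded to override it; editing the trace removes the flaw at its source before reasoning continues.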
Source: arXiv: 2604.13706