LongSumEval: Question-Answering Based Evaluation and Feedback-Driven Refinement for Long Document Summarization
1️⃣ One-Sentence Summary
This paper proposes a unified framework that evaluates the quality of long-document summaries through question answering. Beyond producing a score, it provides concrete improvement suggestions that help the model automatically correct its errors, making summaries more accurate and more reliable.
Evaluating long document summaries remains the primary bottleneck in summarization research. Existing metrics correlate weakly with human judgments and produce aggregate scores without explaining deficiencies or guiding improvement, preventing effective refinement in applications that require verifiable accuracy. We introduce LongSumEval, a unified framework bridging evaluation and generation through structured question-answering feedback. The framework operationalizes summary quality as the answerability and factual alignment of question-answer pairs, generating interpretable scores and actionable feedback that identifies coverage gaps and factual inconsistencies. This resolves the misalignment in which evaluation operates independently of generation objectives. Meta-evaluation of our QA-based evaluation module across seven benchmarks demonstrates substantially stronger agreement with human judgments than established metrics. Structured feedback enables significant quality improvements through self-refinement without retraining. By demonstrating that evaluation feedback can serve as executable instructions for generation, this work establishes a generalizable paradigm for aligning assessment with improvement, with direct implications for controllable text generation requiring verifiable accuracy and transparent quality control. All code and datasets will be released on GitHub for reproducibility.
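To make the abstract's mechanism concrete, the sketch below shows one plausible shape for a QA-based evaluation and self-refinement loop. It is a minimal illustration under stated assumptions, not the paper's implementation: the model-backed steps (question generation, question answering, answer matching, and feedback-conditioned rewriting) are passed in as hypothetical callables that would each be backed by an LLM or QA model in practice.

```python
# Minimal sketch of QA-based summary evaluation with feedback-driven
# self-refinement. All callables (gen_qa, answer, match, rewrite) are
# hypothetical placeholders, NOT LongSumEval's actual API.

from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class QAFeedback:
    score: float = 0.0                                    # fraction of QA pairs answered correctly
    coverage_gaps: list = field(default_factory=list)     # questions the summary cannot answer
    inconsistencies: list = field(default_factory=list)   # (question, predicted, gold) mismatches

def evaluate_summary(
    document: str,
    summary: str,
    gen_qa: Callable[[str], list],                 # document -> [(question, gold_answer)]
    answer: Callable[[str, str], Optional[str]],   # (summary, question) -> answer, or None if unanswerable
    match: Callable[[str, str], bool],             # (predicted, gold) -> factually aligned?
) -> QAFeedback:
    """Score a summary by the answerability and factual alignment of
    QA pairs derived from the source document."""
    qa_pairs = gen_qa(document)
    fb = QAFeedback()
    correct = 0
    for question, gold in qa_pairs:
        pred = answer(summary, question)
        if pred is None:
            fb.coverage_gaps.append(question)            # content missing from the summary
        elif not match(pred, gold):
            fb.inconsistencies.append((question, pred, gold))
        else:
            correct += 1
    fb.score = correct / max(len(qa_pairs), 1)
    return fb

def self_refine(
    document: str,
    summary: str,
    evaluate: Callable[[str, str], QAFeedback],
    rewrite: Callable[[str, str, QAFeedback], str],  # e.g., an LLM prompt conditioned on feedback
    max_rounds: int = 3,
) -> str:
    """Iteratively rewrite the summary using structured feedback as
    instructions, stopping when the QA score no longer improves.
    No retraining is involved."""
    best, best_score = summary, evaluate(document, summary).score
    for _ in range(max_rounds):
        fb = evaluate(document, best)
        if not fb.coverage_gaps and not fb.inconsistencies:
            break                                         # nothing actionable left to fix
        candidate = rewrite(document, best, fb)
        cand_score = evaluate(document, candidate).score
        if cand_score <= best_score:
            break                                         # refinement stopped helping
        best, best_score = candidate, cand_score
    return best
```

The design choice worth noting is that the same `QAFeedback` object serves both roles the abstract describes: its `score` is the interpretable metric, while `coverage_gaps` and `inconsistencies` are the actionable feedback consumed directly by the rewriting step.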
Source: arXiv: 2604.25130