arXiv submission date: 2026-04-28
📄 Abstract - LLM-ReSum: A Framework for LLM Reflective Summarization through Self-Evaluation

Reliable evaluation of large language model (LLM)-generated summaries remains an open challenge, particularly across heterogeneous domains and document lengths. We conduct a comprehensive meta-evaluation of 14 automatic summarization metrics and LLM-based evaluators across seven datasets spanning five domains, covering documents from short news articles to long scientific, governmental, and legal texts (2K-27K words) with over 1,500 human-annotated summaries. Our results show that traditional lexical overlap metrics (e.g., ROUGE, BLEU) exhibit weak or negative correlation with human judgments, while task-specific neural metrics and LLM-based evaluators achieve substantially higher alignment, especially for linguistic quality assessment. Leveraging these findings, we propose LLM-ReSum, a self-reflective summarization framework that integrates LLM-based evaluation and generation in a closed feedback loop without model finetuning. Across three domains, LLM-ReSum improves low-quality summaries by up to 33% in factual accuracy and 39% in coverage, with human evaluators preferring refined summaries in 89% of cases. We additionally introduce PatentSumEval, a new human-annotated benchmark for legal document summarization comprising 180 expert-evaluated summaries. All code and datasets will be released on GitHub.

Top-level tags: llm natural language processing evaluation
Detailed tags: summarization self-evaluation benchmark legal document meta-evaluation

LLM-ReSum: A Framework for LLM Reflective Summarization through Self-Evaluation


1️⃣ One-sentence summary

This study systematically evaluates 14 summarization metrics, finding that traditional metrics correlate weakly with human judgments while LLM-based evaluators are more accurate. Building on this, it proposes the LLM-ReSum framework: without any finetuning, the model iteratively improves its generated summaries through self-evaluation and a feedback loop, raising factual accuracy and content coverage by up to 33% and 39%, respectively.
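The closed generate–evaluate–refine loop described above can be sketched as follows. This is a minimal illustration in the spirit of LLM-ReSum, not the authors' implementation: the `llm_generate`, `llm_evaluate`, and `llm_refine` stubs, the score dimensions, the `threshold`, and `max_rounds` are all hypothetical placeholders where a real system would call an LLM.

```python
# Hypothetical sketch of a self-reflective summarization loop: the same
# model (no finetuning) generates a summary, scores it, and revises it
# until the scores clear a threshold or the round budget runs out.
# All llm_* functions are stand-ins for actual LLM prompts.

def llm_generate(document: str) -> str:
    # Stub: a real system would prompt an LLM to summarize `document`.
    return document[:60]

def llm_evaluate(document: str, summary: str) -> dict:
    # Stub: a real system would ask an LLM evaluator to score the summary
    # on dimensions such as factual accuracy and coverage (0..1 each).
    coverage = min(len(summary) / max(len(document), 1), 1.0)
    return {"factual_accuracy": 1.0, "coverage": coverage}

def llm_refine(document: str, summary: str, scores: dict) -> str:
    # Stub: a real system would feed the evaluator's feedback back into
    # the generator and request a revised summary.
    return document[:len(summary) + 60]

def reflective_summarize(document: str, threshold: float = 0.5,
                         max_rounds: int = 3) -> str:
    """Closed feedback loop: generate, then evaluate-and-refine."""
    summary = llm_generate(document)
    for _ in range(max_rounds):
        scores = llm_evaluate(document, summary)
        if all(s >= threshold for s in scores.values()):
            break  # all quality dimensions pass; stop refining
        summary = llm_refine(document, summary, scores)
    return summary
```

The key design point the abstract emphasizes is that evaluation and generation share one model and communicate only through prompts and scores, so the loop needs no gradient updates.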

Source: arXiv 2604.25665