arXiv submission date: 2026-03-12
📄 Abstract - BLooP: Zero-Shot Abstractive Summarization using Large Language Models with Bigram Lookahead Promotion

Abstractive summarization requires models to generate summaries that convey information in the source document. While large language models can generate summaries without fine-tuning, they often miss key details and include extraneous information. We propose BLooP (Bigram Lookahead Promotion), a simple training-free decoding intervention that encourages large language models (LLMs) to generate tokens that form bigrams from the source document. BLooP operates through a hash table lookup at each decoding step, requiring no training, fine-tuning, or model modification. We demonstrate improvements in ROUGE and BARTScore for Llama-3.1-8B-Instruct, Mistral-Nemo-Instruct-2407, and Gemma-2-9b-it on CNN/DM, CCSum, Multi-News, and SciTLDR. Human evaluation shows that BLooP significantly improves faithfulness without reducing readability. We make the code available at this https URL
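The abstract describes BLooP as a hash-table lookup at each decoding step that promotes tokens forming bigrams from the source document. A minimal sketch of that idea follows; the function names, the additive-bonus formulation, and the bonus value are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch of bigram lookahead promotion at decode time.
# The additive logit bonus and all names below are illustrative assumptions.
from collections import defaultdict


def build_bigram_table(source_tokens):
    """Map each source token to the set of tokens that follow it,
    i.e. a hash table of all bigrams in the source document."""
    table = defaultdict(set)
    for a, b in zip(source_tokens, source_tokens[1:]):
        table[a].add(b)
    return table


def promote_bigrams(logits, prev_token, table, bonus=2.0):
    """At one decoding step, add a constant bonus to the logit of every
    candidate token that would complete a bigram seen in the source."""
    boosted = dict(logits)  # token id -> logit
    for nxt in table.get(prev_token, ()):
        if nxt in boosted:
            boosted[nxt] += bonus
    return boosted
```

In this sketch the table is built once per document, so each decoding step costs only a single hash lookup, consistent with the training-free, model-agnostic framing in the abstract.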

Top-level tags: llm natural language processing model evaluation
Detailed tags: abstractive summarization decoding intervention faithfulness training-free bigram promotion

BLooP: Zero-Shot Abstractive Summarization using Large Language Models with Bigram Lookahead Promotion


1️⃣ One-sentence summary

This paper proposes BLooP, a simple training-free decoding method that steers large language models toward generating bigrams that appear in the source document, improving the accuracy and informational faithfulness of summaries while preserving readability.

Source: arXiv:2603.11415