
🤖 System
📄 Abstract - Optimizing Diversity and Quality through Base-Aligned Model Collaboration

Alignment has greatly improved large language models' (LLMs') output quality at the cost of diversity, yielding highly similar outputs across generations. We propose Base-Aligned Model Collaboration (BACo), an inference-time, token-level model collaboration framework that dynamically combines a base LLM with its aligned counterpart to optimize diversity and quality. Inspired by prior work (Fei et al., 2025), BACo employs routing strategies that determine, at each token, which model to decode from based on next-token prediction uncertainty and the semantic role of the predicted content. Prior diversity-promoting methods, such as retraining, prompt engineering, and multi-sampling, improve diversity but often degrade quality or require costly decoding or post-training. In contrast, BACo achieves both high diversity and quality post hoc within a single pass, while offering strong controllability. We explore a family of routing strategies; across three open-ended generation tasks and 13 metrics covering diversity and quality, BACo consistently surpasses state-of-the-art inference-time baselines. With our best router, BACo achieves a 21.3% joint improvement in diversity and quality. Human evaluations mirror these improvements. The results suggest that collaboration between base and aligned models can optimize and control diversity and quality.
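The token-level routing described above can be illustrated with a toy sketch: at each decoding step, a router checks the aligned model's next-token uncertainty and, if it is high, hands decoding to the base model to promote diversity. All names, the greedy token choice, and the entropy threshold `tau` here are illustrative assumptions, not the paper's actual router:

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def route_token(base_probs, aligned_probs, tau=1.0):
    """Toy uncertainty-based router (hypothetical).

    If the aligned model is uncertain (entropy above the threshold tau),
    decode this token from the base model to promote diversity; otherwise
    keep the aligned model for quality. Returns (source, token_id), where
    the token is chosen greedily from the selected distribution.
    """
    if entropy(aligned_probs) > tau:
        probs, source = base_probs, "base"
    else:
        probs, source = aligned_probs, "aligned"
    token_id = max(range(len(probs)), key=probs.__getitem__)
    return source, token_id

# A confident aligned distribution keeps the aligned model...
print(route_token([0.2, 0.5, 0.3], [0.9, 0.05, 0.05]))   # → ('aligned', 0)
# ...while a near-uniform one routes to the base model.
print(route_token([0.2, 0.5, 0.3], [0.34, 0.33, 0.33]))  # → ('base', 1)
```

In practice such a router would run inside the decoding loop over the two models' logits; this fragment only shows the per-token decision rule.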

Top-level tags: llm, model training, model evaluation
Detailed tags: model collaboration, inference-time optimization, diversity-quality tradeoff, token-level routing, alignment

📄 Paper Summary

Optimizing Diversity and Quality through Base-Aligned Model Collaboration


1️⃣ One-sentence summary

This paper proposes a method called BACo that dynamically combines a base model with its aligned counterpart at inference time. It lets large language models substantially increase response diversity while maintaining high output quality, addressing the difficulty traditional methods have in balancing the two.

