arXiv submission date: 2026-03-05
📄 Abstract - Ensembling Language Models with Sequential Monte Carlo

Practitioners have access to an abundance of language models and prompting strategies for solving many language modeling tasks; yet prior work shows that modeling performance is highly sensitive to both choices. Classical machine learning ensembling techniques offer a principled approach: aggregate predictions from multiple sources to achieve better performance than any single one. However, applying ensembling to language models during decoding is challenging: naively aggregating next-token probabilities yields samples from a locally normalized, biased approximation of the generally intractable ensemble distribution over strings. In this work, we introduce a unified framework for composing $K$ language models into $f$-ensemble distributions for a wide range of functions $f\colon\mathbb{R}_{\geq 0}^{K}\to\mathbb{R}_{\geq 0}$. To sample from these distributions, we propose a byte-level sequential Monte Carlo (SMC) algorithm that operates in a shared character space, enabling ensembles of models with mismatching vocabularies and consistent sampling in the limit. We evaluate a family of $f$-ensembles across prompt and model combinations for various structured text generation tasks, highlighting the benefits of alternative aggregation strategies over traditional probability averaging, and showing that better posterior approximations can yield better ensemble performance.
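To make the construction concrete, here is a minimal Python sketch of the byte-level SMC sampler for an $f$-ensemble as described above. The `next_byte_probs` model interface, the choice of the geometric mean as $f$, and the per-step multinomial resampling schedule are illustrative assumptions for this sketch, not the paper's actual API or algorithmic details.

```python
import math
import random

def f_geometric_mean(ps):
    # One possible choice of f: the (unnormalized) geometric mean
    # of the K per-model byte probabilities.
    return math.prod(ps) ** (1.0 / len(ps))

def smc_sample(models, f, num_particles=32, max_len=128):
    # `models` is a hypothetical list of K objects exposing
    # next_byte_probs(prefix) -> list of 256 probabilities.
    particles = [b"" for _ in range(num_particles)]
    log_weights = [0.0] * num_particles
    for _ in range(max_len):
        for i, prefix in enumerate(particles):
            # Proposal: locally normalized f-aggregate over the next byte.
            per_model = [m.next_byte_probs(prefix) for m in models]
            scores = [f([p[b] for p in per_model]) for b in range(256)]
            z = sum(scores)
            probs = [s / z for s in scores]
            b = random.choices(range(256), weights=probs)[0]
            particles[i] = prefix + bytes([b])
            # Importance weight corrects the local-normalization bias:
            # the target increment is f(...), the proposal increment is
            # f(...)/z, so the incremental weight is z.
            log_weights[i] += math.log(z)
        # Multinomial resampling (a real implementation might resample
        # only when the effective sample size drops too low).
        m = max(log_weights)
        w = [math.exp(lw - m) for lw in log_weights]
        idx = random.choices(range(num_particles), weights=w, k=num_particles)
        particles = [particles[i] for i in idx]
        log_weights = [0.0] * num_particles
    return random.choice(particles)
```

The key point the sketch illustrates: sampling greedily from the locally normalized aggregate alone would yield the biased approximation mentioned in the abstract, while the accumulated log-normalizer weights and resampling steps are what make the particle population consistent for the intended ensemble distribution over strings in the limit.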

Top-level tags: llm, model training, machine learning
Detailed tags: ensembling, sequential monte carlo, language modeling, sampling, text generation

Ensembling Language Models with Sequential Monte Carlo


1️⃣ One-Sentence Summary

This paper proposes a new method that uses a sequential Monte Carlo algorithm to intelligently combine the predictions of multiple language models, achieving better performance on text generation tasks than any individual model or simple probability averaging.

Source: arXiv:2603.05432