arXiv submission date: 2026-03-11
📄 Abstract - Historical Consensus: Preventing Posterior Collapse via Iterative Selection of Gaussian Mixture Priors

Variational autoencoders (VAEs) frequently suffer from posterior collapse, where latent variables become uninformative and the approximate posterior degenerates to the prior. Recent work has characterized this phenomenon as a phase transition governed by the spectral properties of the data covariance matrix. In this paper, we propose a fundamentally different approach: instead of avoiding collapse through architectural constraints or hyperparameter tuning, we eliminate the possibility of collapse altogether by leveraging the multiplicity of Gaussian mixture model (GMM) clusterings. We introduce Historical Consensus Training, an iterative selection procedure that progressively refines a set of candidate GMM priors through alternating optimization and selection. The key insight is that models trained to satisfy multiple distinct clustering constraints develop a historical barrier -- a region in parameter space that remains stable even when subsequently trained with a single objective. We prove that this barrier excludes the collapsed solution, and demonstrate through extensive experiments on synthetic and real-world datasets that our method achieves non-collapsed representations regardless of decoder variance or regularization strength. Our approach requires no explicit stability conditions (e.g., $\sigma^{\prime 2} < \lambda_{\max}$) and works with arbitrary neural architectures. The code is available at this https URL.
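The "alternating optimization and selection" over candidate GMM priors can be sketched as a simple loop. This is an illustrative reading of the abstract, not the paper's implementation: the function name `iterative_gmm_selection`, the candidate pool, and the log-likelihood selection criterion are all assumptions for the sketch, using scikit-learn's `GaussianMixture`.

```python
# Hedged illustration (not the paper's actual algorithm): maintain a pool
# of candidate GMM "priors", alternately fit them (optimization) and keep
# only the best-scoring ones (selection), as the abstract describes.
import numpy as np
from sklearn.mixture import GaussianMixture

def iterative_gmm_selection(X, n_candidates=6, keep=3, rounds=2, seed=0):
    rng = np.random.default_rng(seed)
    # Candidates differ by component count and initialization seed.
    candidates = [
        GaussianMixture(n_components=int(k), random_state=int(s))
        for k, s in zip(rng.integers(2, 5, n_candidates),
                        rng.integers(0, 10_000, n_candidates))
    ]
    for _ in range(rounds):
        # Optimization step: fit every surviving candidate on the data.
        for gmm in candidates:
            gmm.fit(X)
        # Selection step: rank by average log-likelihood and keep the
        # top candidates for the next round.
        candidates.sort(key=lambda g: g.score(X), reverse=True)
        candidates = candidates[:keep]
    return candidates

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy data: two well-separated 2-D clusters.
    X = np.vstack([rng.normal(-3, 1, (100, 2)),
                   rng.normal(3, 1, (100, 2))])
    survivors = iterative_gmm_selection(X)
    print(len(survivors))  # 3
```

In the paper's framing, a VAE would then be trained against several of these surviving priors at once, which is what is claimed to build the "historical barrier" excluding the collapsed solution; that VAE training step is omitted here.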

Top-level tags: machine learning, model training, theory
Detailed tags: variational autoencoders, posterior collapse, gaussian mixture priors, latent variables, phase transition

Historical Consensus: Preventing Posterior Collapse via Iterative Selection of Gaussian Mixture Priors


1️⃣ One-sentence summary

This paper proposes a new method called "Historical Consensus Training" that iteratively selects and optimizes multiple Gaussian mixture model priors, eliminating posterior collapse, a common failure mode of variational autoencoders, at its root: the model obtains meaningful latent representations regardless of decoder variance or regularization strength.

Source: arXiv:2603.10935