实例自适应参数化的摊销变分推断 / Instance-Adaptive Parametrization for Amortized Variational Inference
1️⃣ One-Sentence Summary
This paper proposes a new method called IA-VAE, which uses a hypernetwork to dynamically adjust the encoder parameters for each input. It significantly improves the inference accuracy of variational autoencoders while preserving computational efficiency, effectively mitigating the performance loss that conventional methods incur from parameter sharing.
Latent variable models, including variational autoencoders (VAEs), remain a central tool in modern deep generative modeling due to their scalability and well-founded probabilistic formulation. These models rely on amortized variational inference to enable efficient posterior approximation, but this efficiency comes at the cost of a shared parametrization, giving rise to the amortization gap. We propose the instance-adaptive variational autoencoder (IA-VAE), an amortized variational inference framework in which a hypernetwork generates input-dependent modulations of a shared encoder. This enables input-specific adaptation of the inference model while preserving the efficiency of a single forward pass. By leveraging instance-specific parameter modulations, the proposed approach can achieve performance comparable to standard encoders with substantially fewer parameters, indicating a more efficient use of model capacity. Experiments on synthetic data, where the true posterior is known, show that IA-VAE yields more accurate posterior approximations and reduces the amortization gap. Similarly, on standard image benchmarks, IA-VAE consistently improves held-out ELBO over baseline VAEs, with statistically significant gains across multiple runs. These results suggest that increasing the flexibility of the inference parametrization through instance-adaptive modulation is a key factor in mitigating amortization-induced suboptimality in deep generative models.
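To make the core idea concrete, here is a minimal numpy sketch of an instance-adaptive encoder. It assumes a FiLM-style (scale, shift) modulation of the shared encoder's hidden features, which is one common way a hypernetwork can emit input-dependent parameter modulations; the dimensions, weight names, and the specific modulation form are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper)
x_dim, h_dim, z_dim = 8, 16, 4

# Shared encoder weights, amortized across all inputs
W_enc = rng.standard_normal((h_dim, x_dim)) * 0.1
# Hypothetical hypernetwork: maps input x to per-instance (scale, shift)
W_hyp = rng.standard_normal((2 * h_dim, x_dim)) * 0.1
# Heads producing the Gaussian posterior parameters q(z|x)
W_mu = rng.standard_normal((z_dim, h_dim)) * 0.1
W_logvar = rng.standard_normal((z_dim, h_dim)) * 0.1

def encode(x):
    """Instance-adaptive encoding: the hypernetwork emits a FiLM-style
    (gamma, beta) modulation of the shared encoder's hidden features,
    all within a single forward pass."""
    h = np.tanh(W_enc @ x)                # shared encoder features
    gamma, beta = np.split(W_hyp @ x, 2)  # input-dependent modulation
    h_mod = (1.0 + gamma) * h + beta      # per-instance feature modulation
    mu = W_mu @ h_mod                     # posterior mean
    logvar = W_logvar @ h_mod             # posterior log-variance
    return mu, logvar

x = rng.standard_normal(x_dim)
mu, logvar = encode(x)
print(mu.shape, logvar.shape)
```

Because the hypernetwork only outputs a low-dimensional modulation rather than full weight matrices, per-instance adaptation adds little overhead on top of the shared encoder, consistent with the single-forward-pass efficiency described above.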
Source: arXiv:2604.06796