arXiv submission date: 2026-03-16
📄 Abstract - Preconditioned One-Step Generative Modeling for Bayesian Inverse Problems in Function Spaces

We propose a machine-learning algorithm for Bayesian inverse problems in the function-space regime based on one-step generative transport. Building on Mean Flows, we learn a fully conditional amortized sampler with a neural-operator backbone that maps reference Gaussian noise to approximate posterior samples. We show that while white-noise references may be admissible at a fixed discretization, they become incompatible with the function-space limit, leading to unstable inference for Bayesian problems arising from PDEs. To address this, we adopt a prior-aligned anisotropic Gaussian reference distribution and establish the Lipschitz regularity of the resulting transport. Our method is not distilled from MCMC: training relies only on prior samples and simulated partial, noisy observations. Once trained, it generates a $64\times64$ posterior sample in $\sim 10^{-3}$ s, avoiding the repeated PDE solves of MCMC while matching key posterior summaries.
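The incompatibility of white noise with the function-space limit can be illustrated on a grid: a prior-aligned, trace-class Gaussian reference has samples whose local increments vanish under mesh refinement, while white-noise samples stay equally rough at every resolution. A minimal sketch, assuming a covariance of the form $(-\Delta + \tau^2 I)^{-\alpha}$ with illustrative parameter values (not the paper's exact prior):

```python
import numpy as np

def sample_grf(n, alpha=2.0, tau=3.0, rng=None):
    """Sample a mean-zero Gaussian field on an n x n periodic grid with
    covariance ~ (-Laplacian + tau^2 I)^(-alpha): apply the covariance
    square root to white noise in the Fourier domain. (Assumed prior;
    the paper's actual reference distribution may differ.)"""
    rng = np.random.default_rng(rng)
    k = np.fft.fftfreq(n) * n                    # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    lam = (kx**2 + ky**2 + tau**2) ** (-alpha)   # covariance eigenvalues
    white = rng.standard_normal((n, n))          # white-noise coefficients
    # scale by n so the pointwise variance is resolution-independent
    return n * np.fft.ifft2(np.sqrt(lam) * np.fft.fft2(white)).real

def rms_increment(u):
    """Root-mean-square nearest-neighbour increment (periodic grid)."""
    return float(np.sqrt(np.mean((np.roll(u, 1, axis=0) - u) ** 2)))

for n in (32, 64, 128):
    colored = rms_increment(sample_grf(n, rng=0))
    white = rms_increment(np.random.default_rng(0).standard_normal((n, n)))
    print(n, colored, white)  # colored shrinks with n; white stays ~ sqrt(2)
```

The summable eigenvalue decay ($\alpha > 1$ in 2D) is what makes the reference well defined in the limit, which is the property a per-node white-noise reference lacks.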

Top-level tags: machine learning, theory, model training
Detailed tags: bayesian inverse problems, generative modeling, function spaces, neural operators, transport maps

Preconditioned One-Step Generative Modeling for Bayesian Inverse Problems in Function Spaces


1️⃣ One-sentence summary

This paper proposes a machine-learning method based on one-step generative transport for rapidly solving Bayesian inverse problems in function spaces. By using a reference distribution aligned with the prior, it ensures stability, and once trained it generates high-quality posterior samples extremely fast, avoiding the expensive repeated PDE solves of traditional Markov chain Monte Carlo methods.
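The inference-time saving can be sketched with placeholder components. Everything below (the linear "solver", the proposal scale, and `one_step_sampler` standing in for the trained transport $G_\theta$) is a hypothetical stand-in for the call pattern, not the paper's operators: random-walk MCMC pays PDE solves on every step, while the amortized one-step map never touches the solver at inference time.

```python
import numpy as np

rng = np.random.default_rng(0)
solve_count = 0

def forward_solve(u):
    """Hypothetical stand-in for an expensive PDE solve."""
    global solve_count
    solve_count += 1
    return 0.5 * u

def one_step_sampler(z, y):
    """Hypothetical stand-in for the trained transport G_theta:
    one network forward pass from reference noise z to a posterior sample."""
    return z + 0.1 * y

y = forward_solve(rng.standard_normal(64 * 64))   # one simulated observation

# Random-walk MCMC: every accept/reject test costs fresh PDE solves.
u = np.zeros_like(y)
for _ in range(1000):
    prop = u + 0.05 * rng.standard_normal(u.shape)
    if np.sum((forward_solve(prop) - y) ** 2) < np.sum((forward_solve(u) - y) ** 2):
        u = prop
mcmc_solves = solve_count         # 2001: two per step plus the data simulation

# Amortized one-step sampling: zero PDE solves at inference time.
z = rng.standard_normal(64 * 64)
posterior_sample = one_step_sampler(z, y)
assert solve_count == mcmc_solves  # the sampler never called the solver
```

The training cost of the amortized sampler is paid once, up front, on prior samples and simulated observations; afterwards each new observation costs only a single network evaluation.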

Source: arXiv:2603.14798