Dependence Fidelity and Downstream Inference Stability in Generative Models
1️⃣ One-Sentence Summary
This paper argues that current evaluation criteria for generative models focus too heavily on matching univariate marginal distributions while neglecting the multivariate dependence structure. The authors propose "dependence fidelity" as a new evaluation criterion and show that distortions of the dependence structure can lead downstream inference (e.g., regression analysis) to incorrect conclusions.
Recent advances in generative AI have led to increasingly realistic synthetic data, yet evaluation criteria remain focused on marginal distribution matching. While these diagnostics assess local realism, they provide limited insight into whether a generative model preserves the multivariate dependence structures governing downstream inference. We introduce covariance-level dependence fidelity as a practical criterion for evaluating whether a generative distribution preserves joint structure beyond univariate marginals. We establish three core results. First, distributions can match all univariate marginals exactly while exhibiting substantially different dependence structures, demonstrating that marginal fidelity alone is insufficient. Second, dependence divergence induces quantitative instability in downstream inference, including sign reversals in regression coefficients despite identical marginal behavior. Third, explicit control of covariance-level dependence divergence ensures stable behavior for dependence-sensitive tasks such as principal component analysis. Synthetic constructions illustrate how failures of dependence preservation lead to incorrect conclusions despite identical marginal distributions. These results highlight dependence fidelity as a useful diagnostic for evaluating generative models on dependence-sensitive downstream tasks, with implications for diffusion models and variational autoencoders. The guarantees apply specifically to procedures governed by covariance structure; tasks requiring higher-order dependence, such as tail-event estimation, demand richer criteria.
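The first two results — identical marginals hiding different dependence structures, and the resulting sign reversal of a regression coefficient — can be illustrated with a minimal sketch. The construction below is an assumption for illustration (bivariate Gaussians with mirrored correlation), not necessarily the paper's own synthetic construction: both datasets have standard-normal marginals in each coordinate, yet the slope of a simple regression flips sign.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bivariate(rho, n=100_000):
    # Both marginals are standard normal regardless of rho;
    # only the off-diagonal covariance (dependence) changes.
    cov = np.array([[1.0, rho], [rho, 1.0]])
    return rng.multivariate_normal([0.0, 0.0], cov, size=n)

def ols_slope(x, y):
    # OLS slope of y ~ x; no intercept needed since means are zero.
    return np.dot(x, y) / np.dot(x, x)

a = sample_bivariate(+0.8)   # stand-in for the "real" data
b = sample_bivariate(-0.8)   # generator with identical marginals

# Marginal diagnostics agree: means ~ [0, 0], stds ~ [1, 1].
print(a.mean(axis=0), b.mean(axis=0))
print(a.std(axis=0), b.std(axis=0))

# ...but the downstream regression coefficient reverses sign:
# ~ +0.8 on dataset a versus ~ -0.8 on dataset b.
print(ols_slope(a[:, 0], a[:, 1]))
print(ols_slope(b[:, 0], b[:, 1]))
```

Any marginal-only diagnostic (histograms, per-feature moments) would rate the two datasets as equally faithful, while an analyst fitting the same regression on the synthetic data would reach the opposite conclusion — exactly the instability the covariance-level criterion is meant to detect.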
Source: arXiv: 2603.17041