arXiv submission date: 2026-03-26
📄 Abstract - Beyond identifiability: Learning causal representations with few environments and finite samples

We provide explicit, finite-sample guarantees for learning causal representations from data with a sublinear number of environments. Causal representation learning seeks to provide a rigorous foundation for the general representation learning problem by bridging causal models with latent factor models in order to learn interpretable representations with causal semantics. Despite a blossoming theory of identifiability in causal representation learning, estimation and finite-sample bounds are less well understood. We show that causal representations can be learned with only a logarithmic number of unknown, multi-node interventions, and that the intervention targets need not be carefully designed in advance. Through a careful perturbation analysis, we provide a new analysis of this problem that guarantees consistent recovery of (a) the latent causal graph, (b) the mixing matrix and representations, and (c) \emph{unknown} intervention targets.
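To make the setting concrete, below is a minimal sketch of the data-generating process commonly assumed in this line of work (the dimensions, variable names, and linear-SEM/linear-mixing assumptions are illustrative, not taken from the paper): latent variables follow a structural equation model over a DAG, observations are a linear mixing of the latents, and each "environment" intervenes on an unknown subset of latent nodes.

```python
import numpy as np

# Illustrative setup (assumed, not from the paper): latents Z follow a
# linear SEM over a DAG, observations X = A Z for an unknown mixing
# matrix A, and each environment shifts the noise of an unknown subset
# of latent nodes (a multi-node intervention).

rng = np.random.default_rng(0)
d, p, n = 3, 5, 1000          # latent dim, observed dim, samples per env

# Lower-triangular weights encode the latent DAG: z0 -> z1 -> z2.
B = np.array([[0.0, 0.0, 0.0],
              [0.8, 0.0, 0.0],
              [0.0, 0.5, 0.0]])
A = rng.normal(size=(p, d))   # unknown mixing matrix (full column rank)

def sample_env(targets, shift=2.0):
    """Sample one environment; `targets` is the unknown intervened set."""
    noise = rng.normal(size=(n, d))
    noise[:, list(targets)] += shift     # shift intervention on targets
    Z = np.zeros((n, d))
    for j in range(d):                   # ancestral sampling along the DAG
        Z[:, j] = Z @ B[j] + noise[:, j]
    return Z @ A.T                       # observed data X = A Z

# A few environments with unknown, multi-node intervention targets;
# the learner sees only these X matrices, never A, B, or the targets.
envs = [sample_env(set()), sample_env({0, 2}), sample_env({1})]
```

The recovery problem the paper studies is then: given only such environment datasets, estimate the latent graph (here `B`'s support), the mixing matrix `A` up to the usual indeterminacies, and the intervention targets, with the number of environments growing only logarithmically.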

Top-level tags: theory, machine learning, model training
Detailed tags: causal representation learning, identifiability, finite-sample analysis, intervention recovery, latent causal graph

Beyond identifiability: Learning causal representations with few environments and finite samples


1️⃣ One-sentence summary

This paper proves that, even with only a small number of distinct data environments and finite samples, we can effectively learn hidden variables with causal interpretations, together with the graph of relations among them, without needing to know precisely in advance how the data were intervened on.

Source: arXiv:2603.25796