arXiv submission date: 2026-04-09
📄 Abstract - CausalVAE as a Plug-in for World Models: Towards Reliable Counterfactual Dynamics

This work introduces CausalVAE as a plug-in structural module for latent world models, attached to diverse encoder-transition backbones. Across the reported benchmarks, the plug-in preserves competitive factual prediction while improving intervention-aware counterfactual retrieval, suggesting stronger robustness under distribution shift and interventions. The largest gains appear on the Physics benchmark: averaged over 8 paired baselines, CF-H@1 improves by +102.5%. In a representative GNN-NLL setting on Physics, CF-H@1 rises from 11.0 to 41.0 (+272.7%). Causal analysis shows that the learned structural dependencies recover meaningful first-order physical interaction trends, supporting the interpretability of the learned latent causal structure.
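To make "intervention-aware counterfactual retrieval" concrete, here is a minimal sketch of the kind of latent structural layer the plug-in builds on. It assumes the original CausalVAE formulation, where causal latents satisfy a linear SCM z = Aᵀz + ε (so z = (I − Aᵀ)⁻¹ε for a DAG adjacency matrix A), and shows a do-intervention that severs incoming edges; the function names and the toy DAG are illustrative, not taken from the paper.

```python
import numpy as np

def causal_layer(A, eps):
    """Map exogenous noise eps to causal latents z via z = (I - A^T)^{-1} eps."""
    d = A.shape[0]
    return np.linalg.solve(np.eye(d) - A.T, eps)

def do_intervention(A, eps, idx, value):
    """do(z_idx = value): cut the edges into factor idx and clamp its value."""
    A_do = A.copy()
    A_do[:, idx] = 0.0      # remove the intervened factor's parents
    eps_do = eps.copy()
    eps_do[idx] = value     # fix the factor at the intervened value
    return causal_layer(A_do, eps_do)

# Toy two-factor DAG: z0 -> z1 (e.g. "force" causes "acceleration")
A = np.array([[0.0, 0.8],
              [0.0, 0.0]])
eps = np.array([1.0, 0.5])

z = causal_layer(A, eps)                 # factual latents: [1.0, 1.3]
z_do = do_intervention(A, eps, 0, 2.0)   # under do(z0 = 2): [2.0, 2.1]
print(z, z_do)
```

The point of the sketch: because the effect z1 is computed *through* the learned graph, intervening on the cause z0 propagates downstream, which is what lets a world model equipped with such a layer answer counterfactual queries rather than only factual rollouts.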

Top-level tags: machine learning, model training, theory
Detailed tags: causal representation learning, world models, counterfactual dynamics, latent variable models, distribution shift, robustness

CausalVAE as a Plug-in for World Models: Towards Reliable Counterfactual Dynamics


1️⃣ One-sentence summary

This paper proposes integrating CausalVAE as a plug-in module into a variety of world models; while preserving the models' original predictive ability, it significantly improves the robustness and interpretability of counterfactual reasoning under interventions, with especially strong gains in physics scenarios.

Source: arXiv:2604.07712