World Action Verifier: Self-Improving World Models via Forward-Inverse Asymmetry
1️⃣ One-Sentence Summary
This paper proposes a new method called the World Action Verifier, which decomposes the complex task of future-state prediction into two simpler verification problems and exploits asymmetries in data availability and feature dimensionality, enabling an AI world model to detect its own prediction errors and continually improve, significantly boosting learning efficiency and final performance across a range of robotic tasks.
General-purpose world models promise scalable policy evaluation, optimization, and planning, yet achieving the required level of robustness remains challenging. Unlike policy learning, which primarily focuses on optimal actions, a world model must be reliable over a much broader range of suboptimal actions, which are often insufficiently covered by action-labeled interaction data. To address this challenge, we propose World Action Verifier (WAV), a framework that enables world models to identify their own prediction errors and self-improve. The key idea is to decompose action-conditioned state prediction into two factors -- state plausibility and action reachability -- and verify each separately. We show that these verification problems can be substantially easier than predicting future states due to two underlying asymmetries: the broader availability of action-free data and the lower dimensionality of action-relevant features. Leveraging these asymmetries, we augment a world model with (i) a diverse subgoal generator obtained from video corpora and (ii) a sparse inverse model that infers actions from a subset of state features. By enforcing cycle consistency among generated subgoals, inferred actions, and forward rollouts, WAV provides an effective verification mechanism in under-explored regimes, where existing methods typically fail. Across nine tasks spanning MiniGrid, RoboMimic, and ManiSkill, our method achieves 2x higher sample efficiency while improving downstream policy performance by 18%.
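The cycle-consistency verification described in the abstract — generate a subgoal, infer the action that reaches it with a sparse inverse model, roll the forward model, and compare the rollout to the subgoal — can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the linear dynamics, the `forward_model`, `inverse_model`, and `sample_subgoals` functions, and the choice of the first `k` dimensions as "action-relevant" features are all hypothetical stand-ins for the learned components WAV would use.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(state, action):
    """Toy linear dynamics s' = s + a; stand-in for the learned world model."""
    return state + action

def inverse_model(state, subgoal, k=2):
    """Sparse inverse model: infer an action from only the first k
    (action-relevant) state features, per the lower-dimensionality asymmetry."""
    action = np.zeros_like(state)
    action[:k] = subgoal[:k] - state[:k]
    return action

def sample_subgoals(state, n=8, scale=0.5):
    """Stand-in for a diverse subgoal generator trained on action-free video."""
    return state + scale * rng.standard_normal((n, state.shape[0]))

def wav_verify(state, n_subgoals=8, k=2):
    """Cycle-consistency check: subgoal -> inferred action -> forward rollout.
    A large gap between the rollout and the subgoal on the action-relevant
    features flags a likely world-model prediction error."""
    errors = []
    for g in sample_subgoals(state, n_subgoals):
        a = inverse_model(state, g, k)
        s_next = forward_model(state, a)
        errors.append(np.linalg.norm(s_next[:k] - g[:k]))
    return float(np.mean(errors))

state = np.zeros(4)
print(f"mean cycle-consistency error: {wav_verify(state):.2e}")
```

With a forward model that exactly matches the inverse model, the cycle closes and the error is near zero; a mismatched forward model would inflate the score, which is the signal WAV uses to identify under-explored regimes where predictions cannot be trusted.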
Source: arXiv:2604.01985