arXiv submission date: 2026-02-05
📄 Abstract - Cross-Domain Offline Policy Adaptation via Selective Transition Correction

It remains a critical challenge to adapt policies across domains with mismatched dynamics in reinforcement learning (RL). In this paper, we study cross-domain offline RL, where an offline dataset from another similar source domain can be accessed to enhance policy learning upon a target domain dataset. Directly merging the two datasets may lead to suboptimal performance due to potential dynamics mismatches. Existing approaches typically mitigate this issue through source domain transition filtering or reward modification, which, however, may lead to insufficient exploitation of the valuable source domain data. Instead, we propose to modify the source domain data into the target domain data. To that end, we leverage an inverse policy model and a reward model to correct the actions and rewards of source transitions, explicitly achieving alignment with the target dynamics. Since limited data may result in inaccurate model training, we further employ a forward dynamics model to retain corrected samples that better match the target dynamics than the original transitions. Consequently, we propose the Selective Transition Correction (STC) algorithm, which enables reliable usage of source domain data for policy adaptation. Experiments on various environments with dynamics shifts demonstrate that STC achieves superior performance against existing baselines.
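The correction-and-selection procedure described in the abstract can be sketched roughly as follows. This is only an illustrative Python sketch under assumed interfaces: the names `inverse_policy`, `reward_model`, and `dynamics_model`, their signatures, and the distance-based selection rule are hypothetical stand-ins, not the authors' actual STC implementation.

```python
import numpy as np

def correct_and_select(source_transitions, inverse_policy, reward_model,
                       dynamics_model):
    """Correct source-domain transitions and keep only those whose corrected
    version matches the target dynamics better than the original transition.

    Assumed (hypothetical) interfaces:
      inverse_policy(s, s_next) -> action predicted to reach s_next from s
                                   under the target dynamics
      reward_model(s, a)        -> reward in the target domain
      dynamics_model(s, a)      -> predicted next state under target dynamics
    """
    corrected = []
    for (s, a, r, s_next) in source_transitions:
        # Correct the action: infer which target-domain action would have
        # produced the observed next state s_next from state s.
        a_corr = inverse_policy(s, s_next)
        # Correct the reward to match the corrected action.
        r_corr = reward_model(s, a_corr)

        # Selective retention: keep the corrected sample only if the target
        # dynamics model reproduces s_next better under the corrected action
        # than under the original source-domain action.
        err_orig = np.linalg.norm(dynamics_model(s, a) - s_next)
        err_corr = np.linalg.norm(dynamics_model(s, a_corr) - s_next)
        if err_corr < err_orig:
            corrected.append((s, a_corr, r_corr, s_next))
    return corrected
```

Per the abstract, the retained corrected transitions would then be combined with the target-domain dataset for offline policy learning; transitions whose correction does not improve agreement with the target dynamics are discarded rather than merged blindly.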

Top-level tags: reinforcement learning, machine learning, agents
Detailed tags: offline RL, domain adaptation, dynamics mismatch, transition correction, policy adaptation

Cross-Domain Offline Policy Adaptation via Selective Transition Correction


1️⃣ One-sentence summary

This paper proposes a new algorithm called Selective Transition Correction (STC), which corrects and selectively filters data from a similar source domain with different dynamics, allowing an agent to use that data more safely and effectively to improve offline reinforcement learning performance in the target domain.

Source: arXiv: 2602.05776